Test Report: Docker_Linux_crio_arm64 21738

0f64f31b8846d8060cae128a3e5be9cc35c08ea3:2025-10-16:41932

Test failures (39/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.6
35 TestAddons/parallel/Registry 16.07
36 TestAddons/parallel/RegistryCreds 0.54
37 TestAddons/parallel/Ingress 144.35
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 44.28
42 TestAddons/parallel/Headlamp 3.23
43 TestAddons/parallel/CloudSpanner 6.3
44 TestAddons/parallel/LocalPath 10.65
45 TestAddons/parallel/NvidiaDevicePlugin 6.38
46 TestAddons/parallel/Yakd 6.28
98 TestFunctional/parallel/ServiceCmdConnect 603.51
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.92
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.6
138 TestFunctional/parallel/ServiceCmd/Format 0.45
145 TestFunctional/parallel/ServiceCmd/URL 0.46
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 537.75
175 TestMultiControlPlane/serial/DeleteSecondaryNode 8.67
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.17
191 TestJSONOutput/pause/Command 1.95
197 TestJSONOutput/unpause/Command 1.89
281 TestPause/serial/Pause 7.87
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.51
303 TestStartStop/group/old-k8s-version/serial/Pause 8.51
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.61
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.06
321 TestStartStop/group/no-preload/serial/Pause 6.42
327 TestStartStop/group/embed-certs/serial/Pause 7.57
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.38
338 TestStartStop/group/newest-cni/serial/Pause 6.38
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.02
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.12
TestAddons/serial/Volcano (0.6s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable volcano --alsologtostderr -v=1: exit status 11 (601.552286ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 18:35:07.362365  297046 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:07.363171  297046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:07.363185  297046 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:07.363191  297046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:07.363464  297046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:07.363756  297046 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:07.364185  297046 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:07.364203  297046 addons.go:606] checking whether the cluster is paused
	I1016 18:35:07.364309  297046 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:07.364331  297046 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:07.364790  297046 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:07.382205  297046 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:07.382265  297046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:07.403370  297046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:07.507645  297046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:07.507793  297046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:07.537054  297046 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:07.537086  297046 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:07.537091  297046 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:07.537095  297046 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:07.537099  297046 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:07.537103  297046 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:07.537106  297046 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:07.537109  297046 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:07.537113  297046 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:07.537123  297046 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:07.537130  297046 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:07.537161  297046 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:07.537167  297046 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:07.537171  297046 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:07.537175  297046 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:07.537185  297046 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:07.537190  297046 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:07.537195  297046 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:07.537199  297046 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:07.537202  297046 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:07.537207  297046 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:07.537210  297046 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:07.537213  297046 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:07.537215  297046 cri.go:89] found id: ""
	I1016 18:35:07.537274  297046 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:07.552748  297046 out.go:203] 
	W1016 18:35:07.555774  297046 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:07.555811  297046 out.go:285] * 
	* 
	W1016 18:35:07.878528  297046 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:07.881531  297046 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.60s)

TestAddons/parallel/Registry (16.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.603888ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004082078s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003685029s
addons_test.go:392: (dbg) Run:  kubectl --context addons-303264 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-303264 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-303264 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.410625762s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 ip
2025/10/16 18:35:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable registry --alsologtostderr -v=1: exit status 11 (330.079042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 18:35:34.941036  297988 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:34.941903  297988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:34.941924  297988 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:34.941930  297988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:34.942348  297988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:34.942727  297988 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:34.943413  297988 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:34.943432  297988 addons.go:606] checking whether the cluster is paused
	I1016 18:35:34.943571  297988 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:34.943594  297988 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:34.944289  297988 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:34.970032  297988 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:34.970130  297988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:34.995003  297988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:35.105401  297988 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:35.105516  297988 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:35.160256  297988 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:35.160276  297988 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:35.160281  297988 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:35.160286  297988 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:35.160289  297988 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:35.160293  297988 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:35.160296  297988 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:35.160299  297988 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:35.160302  297988 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:35.160313  297988 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:35.160325  297988 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:35.160329  297988 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:35.160332  297988 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:35.160335  297988 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:35.160339  297988 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:35.160343  297988 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:35.160346  297988 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:35.160351  297988 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:35.160355  297988 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:35.160358  297988 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:35.160362  297988 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:35.160369  297988 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:35.160372  297988 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:35.160375  297988 cri.go:89] found id: ""
	I1016 18:35:35.160431  297988 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:35.178746  297988 out.go:203] 
	W1016 18:35:35.181802  297988 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:35.181838  297988 out.go:285] * 
	* 
	W1016 18:35:35.188399  297988 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:35.191557  297988 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.07s)

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.90983ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-303264
addons_test.go:332: (dbg) Run:  kubectl --context addons-303264 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (262.053ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 18:36:36.555721  299631 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:36:36.556516  299631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:36.556533  299631 out.go:374] Setting ErrFile to fd 2...
	I1016 18:36:36.556539  299631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:36.556851  299631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:36:36.557190  299631 mustload.go:65] Loading cluster: addons-303264
	I1016 18:36:36.557562  299631 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:36.557580  299631 addons.go:606] checking whether the cluster is paused
	I1016 18:36:36.557681  299631 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:36.557703  299631 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:36:36.558150  299631 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:36:36.586066  299631 ssh_runner.go:195] Run: systemctl --version
	I1016 18:36:36.586124  299631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:36:36.604052  299631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:36:36.707667  299631 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:36:36.707760  299631 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:36:36.737715  299631 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:36:36.737737  299631 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:36:36.737742  299631 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:36:36.737751  299631 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:36:36.737755  299631 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:36:36.737759  299631 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:36:36.737762  299631 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:36:36.737764  299631 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:36:36.737767  299631 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:36:36.737774  299631 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:36:36.737777  299631 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:36:36.737780  299631 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:36:36.737783  299631 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:36:36.737787  299631 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:36:36.737796  299631 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:36:36.737801  299631 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:36:36.737808  299631 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:36:36.737812  299631 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:36:36.737815  299631 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:36:36.737818  299631 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:36:36.737823  299631 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:36:36.737828  299631 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:36:36.737831  299631 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:36:36.737835  299631 cri.go:89] found id: ""
	I1016 18:36:36.737890  299631 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:36:36.752548  299631 out.go:203] 
	W1016 18:36:36.755509  299631 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:36:36.755532  299631 out.go:285] * 
	* 
	W1016 18:36:36.762090  299631 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:36:36.765042  299631 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.54s)

TestAddons/parallel/Ingress (144.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-303264 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-303264 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-303264 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [136a2fe8-9ed5-4a92-ab1c-f4f709371cf2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [136a2fe8-9ed5-4a92-ab1c-f4f709371cf2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004915109s
I1016 18:35:56.502470  290312 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.452850915s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-303264 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-303264
helpers_test.go:243: (dbg) docker inspect addons-303264:

-- stdout --
	[
	    {
	        "Id": "039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2",
	        "Created": "2025-10-16T18:32:33.499079971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291461,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:32:33.562059524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/hosts",
	        "LogPath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2-json.log",
	        "Name": "/addons-303264",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-303264:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-303264",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2",
	                "LowerDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-303264",
	                "Source": "/var/lib/docker/volumes/addons-303264/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-303264",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-303264",
	                "name.minikube.sigs.k8s.io": "addons-303264",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c54e14845e91c72ca57667c43e9fd0d59019c21020f58b094484a2f938f1b6c",
	            "SandboxKey": "/var/run/docker/netns/7c54e14845e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-303264": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:70:8d:0d:7f:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04734e091327ec9aae219b7bde6e1d789d28fdf9ff7c1da6401fcd4384794ccf",
	                    "EndpointID": "d5f74218c0ad7ce46275d3ebcc63a8482848f89ab47a52b221be6c8aa3b4559d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-303264",
	                        "039913fab7ea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-303264 -n addons-303264
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-303264 logs -n 25: (1.611406962s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-790969                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-790969 │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ start   │ --download-only -p binary-mirror-086561 --alsologtostderr --binary-mirror http://127.0.0.1:41065 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-086561   │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │                     │
	│ delete  │ -p binary-mirror-086561                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-086561   │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ addons  │ disable dashboard -p addons-303264                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │                     │
	│ addons  │ enable dashboard -p addons-303264                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │                     │
	│ start   │ -p addons-303264 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:35 UTC │
	│ addons  │ addons-303264 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ enable headlamp -p addons-303264 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ ip      │ addons-303264 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │ 16 Oct 25 18:35 UTC │
	│ addons  │ addons-303264 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ ssh     │ addons-303264 ssh cat /opt/local-path-provisioner/pvc-7f6b91b3-738c-4521-a1e3-e30bb8ace15b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │ 16 Oct 25 18:35 UTC │
	│ addons  │ addons-303264 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ ssh     │ addons-303264 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │                     │
	│ addons  │ addons-303264 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │                     │
	│ addons  │ addons-303264 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-303264                                                                                                                                                                                                                                                                                                                                                                                           │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │ 16 Oct 25 18:36 UTC │
	│ addons  │ addons-303264 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:36 UTC │                     │
	│ ip      │ addons-303264 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:38 UTC │ 16 Oct 25 18:38 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:32:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
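Each entry below uses the glog prefix described above: severity (I/W/E/F), month and day, time with microseconds, PID, then the source file and line. For example, "I1016 18:32:07.582538  291068 out.go:360]" is an Info message logged on Oct 16 by PID 291068 from out.go line 360. As a minimal sketch (the file name is an assumption), warnings and errors can be filtered out of such a log with:

    # keep only glog-style Warning/Error lines
    grep -E '^[[:space:]]*[WE][0-9]{4} ' minikube-start.log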
	I1016 18:32:07.582538  291068 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:32:07.582653  291068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:32:07.582664  291068 out.go:374] Setting ErrFile to fd 2...
	I1016 18:32:07.582670  291068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:32:07.582909  291068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:32:07.583332  291068 out.go:368] Setting JSON to false
	I1016 18:32:07.584183  291068 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4457,"bootTime":1760635071,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:32:07.584251  291068 start.go:141] virtualization:  
	I1016 18:32:07.586034  291068 out.go:179] * [addons-303264] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:32:07.587512  291068 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:32:07.587594  291068 notify.go:220] Checking for updates...
	I1016 18:32:07.590401  291068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:32:07.592235  291068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:32:07.593405  291068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:32:07.594977  291068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:32:07.596122  291068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:32:07.597540  291068 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:32:07.618548  291068 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:32:07.618678  291068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:32:07.685084  291068 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-16 18:32:07.674852465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:32:07.685227  291068 docker.go:318] overlay module found
	I1016 18:32:07.686634  291068 out.go:179] * Using the docker driver based on user configuration
	I1016 18:32:07.687769  291068 start.go:305] selected driver: docker
	I1016 18:32:07.687795  291068 start.go:925] validating driver "docker" against <nil>
	I1016 18:32:07.687819  291068 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:32:07.688555  291068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:32:07.749277  291068 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-16 18:32:07.739672497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:32:07.749457  291068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:32:07.749716  291068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:32:07.751040  291068 out.go:179] * Using Docker driver with root privileges
	I1016 18:32:07.752148  291068 cni.go:84] Creating CNI manager for ""
	I1016 18:32:07.752209  291068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:32:07.752222  291068 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:32:07.752297  291068 start.go:349] cluster config:
	{Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1016 18:32:07.753728  291068 out.go:179] * Starting "addons-303264" primary control-plane node in "addons-303264" cluster
	I1016 18:32:07.755041  291068 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:32:07.756241  291068 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:32:07.757316  291068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:32:07.757388  291068 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:32:07.757401  291068 cache.go:58] Caching tarball of preloaded images
	I1016 18:32:07.757493  291068 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:32:07.757507  291068 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:32:07.757836  291068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/config.json ...
	I1016 18:32:07.757861  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/config.json: {Name:mk3a4acacad842b0d0bcf0e299ebde6b8b609acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:07.758033  291068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:32:07.773810  291068 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1016 18:32:07.773941  291068 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1016 18:32:07.773966  291068 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1016 18:32:07.773972  291068 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1016 18:32:07.773982  291068 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1016 18:32:07.773988  291068 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1016 18:32:25.642782  291068 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1016 18:32:25.642826  291068 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:32:25.642871  291068 start.go:360] acquireMachinesLock for addons-303264: {Name:mke9093fccea664c8560b0ff83054243f330ac14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:32:25.642997  291068 start.go:364] duration metric: took 101.061µs to acquireMachinesLock for "addons-303264"
	I1016 18:32:25.643028  291068 start.go:93] Provisioning new machine with config: &{Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:32:25.643111  291068 start.go:125] createHost starting for "" (driver="docker")
	I1016 18:32:25.646639  291068 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1016 18:32:25.646903  291068 start.go:159] libmachine.API.Create for "addons-303264" (driver="docker")
	I1016 18:32:25.646950  291068 client.go:168] LocalClient.Create starting
	I1016 18:32:25.647091  291068 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 18:32:25.761415  291068 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 18:32:26.574839  291068 cli_runner.go:164] Run: docker network inspect addons-303264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:32:26.591381  291068 cli_runner.go:211] docker network inspect addons-303264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:32:26.591501  291068 network_create.go:284] running [docker network inspect addons-303264] to gather additional debugging logs...
	I1016 18:32:26.591526  291068 cli_runner.go:164] Run: docker network inspect addons-303264
	W1016 18:32:26.608664  291068 cli_runner.go:211] docker network inspect addons-303264 returned with exit code 1
	I1016 18:32:26.608699  291068 network_create.go:287] error running [docker network inspect addons-303264]: docker network inspect addons-303264: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-303264 not found
	I1016 18:32:26.608713  291068 network_create.go:289] output of [docker network inspect addons-303264]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-303264 not found
	
	** /stderr **
	I1016 18:32:26.608844  291068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:32:26.625623  291068 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3990}
	I1016 18:32:26.625664  291068 network_create.go:124] attempt to create docker network addons-303264 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1016 18:32:26.625719  291068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-303264 addons-303264
	I1016 18:32:26.690858  291068 network_create.go:108] docker network addons-303264 192.168.49.0/24 created
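For context, the bridge network created by the command above can be checked with a plain docker CLI call (a minimal sketch; the profile name and subnet are taken from this log):

    docker network inspect addons-303264 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
    # expected: subnet=192.168.49.0/24 gateway=192.168.49.1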
	I1016 18:32:26.690892  291068 kic.go:121] calculated static IP "192.168.49.2" for the "addons-303264" container
	I1016 18:32:26.690982  291068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:32:26.705311  291068 cli_runner.go:164] Run: docker volume create addons-303264 --label name.minikube.sigs.k8s.io=addons-303264 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:32:26.727396  291068 oci.go:103] Successfully created a docker volume addons-303264
	I1016 18:32:26.727495  291068 cli_runner.go:164] Run: docker run --rm --name addons-303264-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-303264 --entrypoint /usr/bin/test -v addons-303264:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 18:32:28.937233  291068 cli_runner.go:217] Completed: docker run --rm --name addons-303264-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-303264 --entrypoint /usr/bin/test -v addons-303264:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib: (2.209674317s)
	I1016 18:32:28.937262  291068 oci.go:107] Successfully prepared a docker volume addons-303264
	I1016 18:32:28.937304  291068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:32:28.937324  291068 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:32:28.937385  291068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-303264:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 18:32:33.430703  291068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-303264:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.493280041s)
	I1016 18:32:33.430734  291068 kic.go:203] duration metric: took 4.493406833s to extract preloaded images to volume ...
	W1016 18:32:33.430886  291068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 18:32:33.431023  291068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:32:33.484116  291068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-303264 --name addons-303264 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-303264 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-303264 --network addons-303264 --ip 192.168.49.2 --volume addons-303264:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 18:32:33.781620  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Running}}
	I1016 18:32:33.800381  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:32:33.821222  291068 cli_runner.go:164] Run: docker exec addons-303264 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:32:33.875572  291068 oci.go:144] the created container "addons-303264" has a running status.
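The docker run above publishes the node's SSH (22) and API server (8443) ports on random 127.0.0.1 host ports; a quick way to see the mapping (names from the log, actual ports vary per run):

    docker port addons-303264 22     # e.g. 127.0.0.1:33138, matching the SSH port used below
    docker port addons-303264 8443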
	I1016 18:32:33.875607  291068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa...
	I1016 18:32:34.235581  291068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:32:34.258763  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:32:34.283071  291068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:32:34.283102  291068 kic_runner.go:114] Args: [docker exec --privileged addons-303264 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:32:34.325008  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:32:34.342333  291068 machine.go:93] provisionDockerMachine start ...
	I1016 18:32:34.342423  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:34.359445  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:34.359772  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:34.359783  291068 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:32:34.360451  291068 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:32:37.513239  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-303264
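At this point the machine is reachable over SSH with the key generated earlier; an equivalent manual login, using the key path, user, and port shown in this log, would be:

    ssh -i /home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa \
        -p 33138 docker@127.0.0.1 hostname
    # prints: addons-303264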
	
	I1016 18:32:37.513266  291068 ubuntu.go:182] provisioning hostname "addons-303264"
	I1016 18:32:37.513332  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:37.531389  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:37.531716  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:37.531734  291068 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-303264 && echo "addons-303264" | sudo tee /etc/hostname
	I1016 18:32:37.686897  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-303264
	
	I1016 18:32:37.686975  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:37.704635  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:37.704946  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:37.704966  291068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-303264' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-303264/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-303264' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:32:37.849396  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:32:37.849424  291068 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:32:37.849450  291068 ubuntu.go:190] setting up certificates
	I1016 18:32:37.849461  291068 provision.go:84] configureAuth start
	I1016 18:32:37.849523  291068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-303264
	I1016 18:32:37.866532  291068 provision.go:143] copyHostCerts
	I1016 18:32:37.866617  291068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:32:37.866759  291068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:32:37.866829  291068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:32:37.866892  291068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.addons-303264 san=[127.0.0.1 192.168.49.2 addons-303264 localhost minikube]
	I1016 18:32:38.098485  291068 provision.go:177] copyRemoteCerts
	I1016 18:32:38.098554  291068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:32:38.098598  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.117444  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.220910  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:32:38.238203  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:32:38.255933  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:32:38.273274  291068 provision.go:87] duration metric: took 423.786068ms to configureAuth
	I1016 18:32:38.273307  291068 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:32:38.273490  291068 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:32:38.273601  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.290252  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:38.290565  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:38.290587  291068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:32:38.539578  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:32:38.539602  291068 machine.go:96] duration metric: took 4.197249119s to provisionDockerMachine
	I1016 18:32:38.539611  291068 client.go:171] duration metric: took 12.892649605s to LocalClient.Create
	I1016 18:32:38.539641  291068 start.go:167] duration metric: took 12.892723672s to libmachine.API.Create "addons-303264"
	I1016 18:32:38.539657  291068 start.go:293] postStartSetup for "addons-303264" (driver="docker")
	I1016 18:32:38.539668  291068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:32:38.539759  291068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:32:38.539805  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.556878  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.662895  291068 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:32:38.666583  291068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:32:38.666614  291068 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:32:38.666626  291068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:32:38.666715  291068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:32:38.666777  291068 start.go:296] duration metric: took 127.112308ms for postStartSetup
	I1016 18:32:38.667120  291068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-303264
	I1016 18:32:38.684876  291068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/config.json ...
	I1016 18:32:38.685273  291068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:32:38.685337  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.702141  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.802436  291068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:32:38.807070  291068 start.go:128] duration metric: took 13.163942831s to createHost
	I1016 18:32:38.807094  291068 start.go:83] releasing machines lock for "addons-303264", held for 13.164083637s
	I1016 18:32:38.807162  291068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-303264
	I1016 18:32:38.824464  291068 ssh_runner.go:195] Run: cat /version.json
	I1016 18:32:38.824488  291068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:32:38.824520  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.824560  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.844450  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.852336  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:39.034588  291068 ssh_runner.go:195] Run: systemctl --version
	I1016 18:32:39.040950  291068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:32:39.077434  291068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:32:39.081856  291068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:32:39.081960  291068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:32:39.110700  291068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1016 18:32:39.110733  291068 start.go:495] detecting cgroup driver to use...
	I1016 18:32:39.110766  291068 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:32:39.110832  291068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:32:39.127569  291068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:32:39.140376  291068 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:32:39.140438  291068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:32:39.158736  291068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:32:39.177990  291068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:32:39.291248  291068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:32:39.421189  291068 docker.go:234] disabling docker service ...
	I1016 18:32:39.421311  291068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:32:39.443225  291068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:32:39.456656  291068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:32:39.579336  291068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:32:39.694063  291068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:32:39.706857  291068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:32:39.720589  291068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:32:39.720672  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.729216  291068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:32:39.729291  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.737831  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.746136  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.754949  291068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:32:39.762817  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.771226  291068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.784128  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.793717  291068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:32:39.801111  291068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:32:39.808325  291068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:32:39.918821  291068 ssh_runner.go:195] Run: sudo systemctl restart crio
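After the sed edits and restart above, the CRI-O drop-in should carry the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl that were just written; a quick confirmation (expected values inferred from the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",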
	I1016 18:32:40.047179  291068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:32:40.047328  291068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:32:40.052953  291068 start.go:563] Will wait 60s for crictl version
	I1016 18:32:40.053085  291068 ssh_runner.go:195] Run: which crictl
	I1016 18:32:40.059549  291068 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:32:40.096586  291068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:32:40.096758  291068 ssh_runner.go:195] Run: crio --version
	I1016 18:32:40.130836  291068 ssh_runner.go:195] Run: crio --version
	I1016 18:32:40.165035  291068 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:32:40.167888  291068 cli_runner.go:164] Run: docker network inspect addons-303264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:32:40.184755  291068 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:32:40.188982  291068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:32:40.199779  291068 kubeadm.go:883] updating cluster {Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:32:40.199900  291068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:32:40.199963  291068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:32:40.236053  291068 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:32:40.236077  291068 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:32:40.236133  291068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:32:40.263459  291068 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:32:40.263484  291068 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:32:40.263492  291068 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 18:32:40.263580  291068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-303264 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:32:40.263672  291068 ssh_runner.go:195] Run: crio config
	I1016 18:32:40.336157  291068 cni.go:84] Creating CNI manager for ""
	I1016 18:32:40.336191  291068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:32:40.336213  291068 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:32:40.336261  291068 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-303264 NodeName:addons-303264 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:32:40.336439  291068 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-303264"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
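The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). A rough, simplified sketch of how such a config drives cluster bootstrap (not the exact invocation minikube uses) is:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml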
	
	I1016 18:32:40.336528  291068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:32:40.344661  291068 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:32:40.344753  291068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:32:40.352884  291068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 18:32:40.366752  291068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:32:40.379513  291068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1016 18:32:40.392958  291068 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:32:40.396706  291068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:32:40.405979  291068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:32:40.516390  291068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:32:40.533353  291068 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264 for IP: 192.168.49.2
	I1016 18:32:40.533378  291068 certs.go:195] generating shared ca certs ...
	I1016 18:32:40.533394  291068 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:40.533599  291068 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:32:40.877674  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt ...
	I1016 18:32:40.877705  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt: {Name:mk27fd733cad0eb66b2f3a98a14dd84398d1eaa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:40.877933  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key ...
	I1016 18:32:40.877950  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key: {Name:mkc1ec8ff0d3175e6851ad88a1f8aae31f527492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:40.878047  291068 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:32:41.426674  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt ...
	I1016 18:32:41.426703  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt: {Name:mk6439ddace249e2586a7fd1718c7a829265fdab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:41.426892  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key ...
	I1016 18:32:41.426906  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key: {Name:mk33f0a1918158d348ed027ab4286c18ae5c709e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:41.426996  291068 certs.go:257] generating profile certs ...
	I1016 18:32:41.427057  291068 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.key
	I1016 18:32:41.427078  291068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt with IP's: []
	I1016 18:32:42.000589  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt ...
	I1016 18:32:42.000619  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: {Name:mk6b93ac1ce658048e7994efc7ba4a2cc77453a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.000812  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.key ...
	I1016 18:32:42.000825  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.key: {Name:mk594099792513e66d42a18369006a4332135bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.000912  291068 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45
	I1016 18:32:42.000933  291068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1016 18:32:42.507832  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45 ...
	I1016 18:32:42.507863  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45: {Name:mk285f0e178bcbdb668019dc814db28d26e6406f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.508047  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45 ...
	I1016 18:32:42.508064  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45: {Name:mkef5dc30d88340b24c58b0f1aa5ee11d71308cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.508154  291068 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt
	I1016 18:32:42.508241  291068 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key
	I1016 18:32:42.508303  291068 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key
	I1016 18:32:42.508324  291068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt with IP's: []
	I1016 18:32:42.618470  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt ...
	I1016 18:32:42.618503  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt: {Name:mk928b3858ec1e54cb9bb0aabd6ebc3dd71a4ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.619415  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key ...
	I1016 18:32:42.619452  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key: {Name:mkef6e3a58a143aecde32f301b1971211247a1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.619709  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:32:42.619773  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:32:42.619807  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:32:42.619852  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:32:42.620539  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:32:42.639641  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:32:42.658250  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:32:42.676725  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:32:42.694443  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 18:32:42.711833  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:32:42.729467  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:32:42.747068  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 18:32:42.765831  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:32:42.783340  291068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:32:42.795781  291068 ssh_runner.go:195] Run: openssl version
	I1016 18:32:42.802038  291068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:32:42.810455  291068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:32:42.814367  291068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:32:42.814455  291068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:32:42.855115  291068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:32:42.863460  291068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:32:42.867901  291068 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:32:42.867948  291068 kubeadm.go:400] StartCluster: {Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:32:42.868022  291068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:32:42.868086  291068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:32:42.893536  291068 cri.go:89] found id: ""
	I1016 18:32:42.893607  291068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:32:42.901212  291068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:32:42.910407  291068 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:32:42.910527  291068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:32:42.921291  291068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:32:42.921351  291068 kubeadm.go:157] found existing configuration files:
	
	I1016 18:32:42.921422  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:32:42.931247  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:32:42.931363  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:32:42.939160  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:32:42.947698  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:32:42.947807  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:32:42.955556  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:32:42.964037  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:32:42.964148  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:32:42.972831  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:32:42.980282  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:32:42.980369  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:32:42.987590  291068 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:32:43.025419  291068 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:32:43.025792  291068 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:32:43.050704  291068 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:32:43.050843  291068 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1016 18:32:43.050904  291068 kubeadm.go:318] OS: Linux
	I1016 18:32:43.050981  291068 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:32:43.051060  291068 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1016 18:32:43.051142  291068 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:32:43.051218  291068 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:32:43.051293  291068 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:32:43.051415  291068 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:32:43.051489  291068 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:32:43.051545  291068 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:32:43.051599  291068 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1016 18:32:43.121562  291068 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:32:43.121725  291068 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:32:43.121865  291068 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:32:43.133175  291068 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:32:43.139246  291068 out.go:252]   - Generating certificates and keys ...
	I1016 18:32:43.139422  291068 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:32:43.139525  291068 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:32:43.738804  291068 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:32:44.048480  291068 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:32:45.317059  291068 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:32:45.514933  291068 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:32:45.852310  291068 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:32:45.852694  291068 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-303264 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 18:32:46.522530  291068 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:32:46.522905  291068 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-303264 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 18:32:47.279454  291068 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:32:48.548178  291068 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:32:48.632350  291068 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:32:48.632657  291068 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:32:49.062712  291068 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:32:49.416352  291068 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:32:49.894072  291068 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:32:50.490452  291068 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:32:50.748527  291068 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:32:50.748999  291068 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:32:50.754054  291068 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:32:50.757602  291068 out.go:252]   - Booting up control plane ...
	I1016 18:32:50.757721  291068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:32:50.757804  291068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:32:50.757873  291068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:32:50.772154  291068 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:32:50.772286  291068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:32:50.780164  291068 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:32:50.780605  291068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:32:50.780898  291068 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:32:50.904518  291068 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:32:50.904646  291068 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:32:52.406101  291068 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501623429s
	I1016 18:32:52.410513  291068 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:32:52.410650  291068 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1016 18:32:52.410783  291068 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:32:52.410955  291068 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:32:55.743933  291068 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.331528243s
	I1016 18:32:57.076407  291068 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.664450467s
	I1016 18:32:57.915051  291068 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502856329s
	I1016 18:32:57.936813  291068 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:32:57.952915  291068 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:32:57.971798  291068 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:32:57.972048  291068 kubeadm.go:318] [mark-control-plane] Marking the node addons-303264 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:32:57.989598  291068 kubeadm.go:318] [bootstrap-token] Using token: jh1ftm.0q5a4qmrb00w77x3
	I1016 18:32:57.992938  291068 out.go:252]   - Configuring RBAC rules ...
	I1016 18:32:57.993067  291068 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:32:58.001491  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:32:58.013474  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:32:58.018187  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:32:58.024445  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:32:58.029919  291068 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:32:58.324815  291068 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:32:58.755227  291068 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:32:59.321778  291068 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:32:59.322887  291068 kubeadm.go:318] 
	I1016 18:32:59.322967  291068 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:32:59.322973  291068 kubeadm.go:318] 
	I1016 18:32:59.323055  291068 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:32:59.323060  291068 kubeadm.go:318] 
	I1016 18:32:59.323102  291068 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:32:59.323195  291068 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:32:59.323265  291068 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:32:59.323275  291068 kubeadm.go:318] 
	I1016 18:32:59.323336  291068 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:32:59.323341  291068 kubeadm.go:318] 
	I1016 18:32:59.323403  291068 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:32:59.323416  291068 kubeadm.go:318] 
	I1016 18:32:59.323482  291068 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:32:59.323569  291068 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:32:59.323645  291068 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:32:59.323654  291068 kubeadm.go:318] 
	I1016 18:32:59.323744  291068 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:32:59.323840  291068 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:32:59.323852  291068 kubeadm.go:318] 
	I1016 18:32:59.323951  291068 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token jh1ftm.0q5a4qmrb00w77x3 \
	I1016 18:32:59.324086  291068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 18:32:59.324116  291068 kubeadm.go:318] 	--control-plane 
	I1016 18:32:59.324124  291068 kubeadm.go:318] 
	I1016 18:32:59.324213  291068 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:32:59.324223  291068 kubeadm.go:318] 
	I1016 18:32:59.324318  291068 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token jh1ftm.0q5a4qmrb00w77x3 \
	I1016 18:32:59.324434  291068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
	I1016 18:32:59.327808  291068 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 18:32:59.328061  291068 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 18:32:59.328177  291068 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 18:32:59.328198  291068 cni.go:84] Creating CNI manager for ""
	I1016 18:32:59.328209  291068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:32:59.331527  291068 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:32:59.334618  291068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:32:59.338949  291068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:32:59.338972  291068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:32:59.351864  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:32:59.613121  291068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:32:59.613304  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:32:59.613377  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-303264 minikube.k8s.io/updated_at=2025_10_16T18_32_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=addons-303264 minikube.k8s.io/primary=true
	I1016 18:32:59.802952  291068 ops.go:34] apiserver oom_adj: -16
	I1016 18:32:59.803052  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:00.305557  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:00.803186  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:01.304030  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:01.803192  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:02.303256  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:02.804162  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:03.304124  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:03.803759  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:03.921907  291068 kubeadm.go:1113] duration metric: took 4.308658288s to wait for elevateKubeSystemPrivileges
	I1016 18:33:03.921933  291068 kubeadm.go:402] duration metric: took 21.053986989s to StartCluster
	I1016 18:33:03.921950  291068 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:33:03.922063  291068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:33:03.922517  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:33:03.922730  291068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:33:03.922876  291068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:33:03.923108  291068 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:33:03.923138  291068 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1016 18:33:03.923213  291068 addons.go:69] Setting yakd=true in profile "addons-303264"
	I1016 18:33:03.923231  291068 addons.go:238] Setting addon yakd=true in "addons-303264"
	I1016 18:33:03.923255  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.923704  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.924276  291068 addons.go:69] Setting metrics-server=true in profile "addons-303264"
	I1016 18:33:03.924298  291068 addons.go:238] Setting addon metrics-server=true in "addons-303264"
	I1016 18:33:03.924319  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.924730  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.924885  291068 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-303264"
	I1016 18:33:03.924900  291068 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-303264"
	I1016 18:33:03.924919  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.925356  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.927924  291068 addons.go:69] Setting registry=true in profile "addons-303264"
	I1016 18:33:03.927950  291068 addons.go:238] Setting addon registry=true in "addons-303264"
	I1016 18:33:03.927984  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.928415  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.928965  291068 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-303264"
	I1016 18:33:03.929043  291068 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-303264"
	I1016 18:33:03.930128  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.931462  291068 addons.go:69] Setting cloud-spanner=true in profile "addons-303264"
	I1016 18:33:03.931482  291068 addons.go:238] Setting addon cloud-spanner=true in "addons-303264"
	I1016 18:33:03.931504  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.931894  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.933033  291068 addons.go:69] Setting registry-creds=true in profile "addons-303264"
	I1016 18:33:03.933080  291068 addons.go:238] Setting addon registry-creds=true in "addons-303264"
	I1016 18:33:03.933239  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.933748  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.942796  291068 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-303264"
	I1016 18:33:03.942876  291068 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-303264"
	I1016 18:33:03.942912  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.943383  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.945725  291068 addons.go:69] Setting storage-provisioner=true in profile "addons-303264"
	I1016 18:33:03.945807  291068 addons.go:238] Setting addon storage-provisioner=true in "addons-303264"
	I1016 18:33:03.945884  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.946456  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.958323  291068 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-303264"
	I1016 18:33:03.958358  291068 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-303264"
	I1016 18:33:03.958707  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.958839  291068 addons.go:69] Setting default-storageclass=true in profile "addons-303264"
	I1016 18:33:03.958851  291068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-303264"
	I1016 18:33:03.959087  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.974610  291068 addons.go:69] Setting volcano=true in profile "addons-303264"
	I1016 18:33:03.974644  291068 addons.go:238] Setting addon volcano=true in "addons-303264"
	I1016 18:33:03.974681  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.975184  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.977216  291068 addons.go:69] Setting gcp-auth=true in profile "addons-303264"
	I1016 18:33:03.977252  291068 mustload.go:65] Loading cluster: addons-303264
	I1016 18:33:03.977526  291068 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:33:03.977794  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.007371  291068 addons.go:69] Setting volumesnapshots=true in profile "addons-303264"
	I1016 18:33:04.007415  291068 addons.go:238] Setting addon volumesnapshots=true in "addons-303264"
	I1016 18:33:04.007455  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.008177  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.011909  291068 addons.go:69] Setting ingress=true in profile "addons-303264"
	I1016 18:33:04.012000  291068 addons.go:238] Setting addon ingress=true in "addons-303264"
	I1016 18:33:04.012078  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.012730  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.025002  291068 addons.go:69] Setting ingress-dns=true in profile "addons-303264"
	I1016 18:33:04.025130  291068 addons.go:238] Setting addon ingress-dns=true in "addons-303264"
	I1016 18:33:04.025354  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.025746  291068 out.go:179] * Verifying Kubernetes components...
	I1016 18:33:04.138317  291068 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:33:04.141259  291068 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:33:04.141283  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:33:04.141354  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.026618  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.026628  291068 addons.go:69] Setting inspektor-gadget=true in profile "addons-303264"
	I1016 18:33:04.163024  291068 addons.go:238] Setting addon inspektor-gadget=true in "addons-303264"
	I1016 18:33:04.163096  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.163700  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.181819  291068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:33:04.182279  291068 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1016 18:33:04.185957  291068 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 18:33:04.185980  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1016 18:33:04.186044  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.186616  291068 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1016 18:33:04.212033  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1016 18:33:04.215090  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1016 18:33:04.215167  291068 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1016 18:33:04.215279  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	W1016 18:33:04.219821  291068 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1016 18:33:04.225290  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.062723  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.248734  291068 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1016 18:33:04.248863  291068 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1016 18:33:04.248905  291068 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1016 18:33:04.251599  291068 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1016 18:33:04.251621  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1016 18:33:04.251691  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.262233  291068 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-303264"
	I1016 18:33:04.262299  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.262704  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.273430  291068 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1016 18:33:04.277203  291068 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 18:33:04.277229  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1016 18:33:04.277294  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.296590  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1016 18:33:04.296610  291068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1016 18:33:04.296673  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.299156  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1016 18:33:04.299176  291068 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1016 18:33:04.299239  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.321415  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1016 18:33:04.329275  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1016 18:33:04.333407  291068 addons.go:238] Setting addon default-storageclass=true in "addons-303264"
	I1016 18:33:04.333453  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.333863  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.351599  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1016 18:33:04.351781  291068 out.go:179]   - Using image docker.io/registry:3.0.0
	I1016 18:33:04.363644  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1016 18:33:04.378876  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 18:33:04.380159  291068 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1016 18:33:04.380179  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1016 18:33:04.380246  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.411026  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.414017  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.421777  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 18:33:04.430458  291068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:33:04.433344  291068 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 18:33:04.442815  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1016 18:33:04.433369  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1016 18:33:04.451005  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.454290  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1016 18:33:04.454967  291068 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1016 18:33:04.455040  291068 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1016 18:33:04.455576  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.457395  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.461549  291068 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1016 18:33:04.461577  291068 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1016 18:33:04.461642  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.471182  291068 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1016 18:33:04.473306  291068 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 18:33:04.473330  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1016 18:33:04.473396  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.482874  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1016 18:33:04.492257  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1016 18:33:04.495258  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1016 18:33:04.498098  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1016 18:33:04.498125  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1016 18:33:04.498191  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.505485  291068 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 18:33:04.505506  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1016 18:33:04.505568  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.535998  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.552389  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.553329  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.555390  291068 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1016 18:33:04.560140  291068 out.go:179]   - Using image docker.io/busybox:stable
	I1016 18:33:04.565987  291068 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 18:33:04.566010  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1016 18:33:04.566081  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.570721  291068 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:33:04.570747  291068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:33:04.570814  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.641490  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.642663  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.678929  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.691610  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.697546  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.702743  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.705404  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.720507  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.736871  291068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:33:05.182602  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 18:33:05.246321  291068 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1016 18:33:05.246344  291068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1016 18:33:05.274668  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 18:33:05.329612  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 18:33:05.335034  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:33:05.351938  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1016 18:33:05.352010  291068 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1016 18:33:05.368128  291068 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1016 18:33:05.368204  291068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1016 18:33:05.404766  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1016 18:33:05.436805  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 18:33:05.460273  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1016 18:33:05.460344  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1016 18:33:05.462788  291068 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:05.462860  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1016 18:33:05.479300  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1016 18:33:05.479376  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1016 18:33:05.484796  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 18:33:05.487825  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:33:05.525593  291068 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1016 18:33:05.525674  291068 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1016 18:33:05.556971  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 18:33:05.586493  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1016 18:33:05.586571  291068 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1016 18:33:05.603805  291068 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1016 18:33:05.603890  291068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1016 18:33:05.717222  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1016 18:33:05.717294  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1016 18:33:05.731215  291068 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1016 18:33:05.731281  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1016 18:33:05.731561  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1016 18:33:05.731598  291068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1016 18:33:05.761711  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:05.826455  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1016 18:33:05.826521  291068 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1016 18:33:05.835934  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1016 18:33:05.836015  291068 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1016 18:33:05.879021  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1016 18:33:05.879097  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1016 18:33:05.930812  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 18:33:05.930875  291068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1016 18:33:05.957818  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1016 18:33:05.968485  291068 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 18:33:05.968552  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1016 18:33:06.045663  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1016 18:33:06.045741  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1016 18:33:06.105259  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1016 18:33:06.105529  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1016 18:33:06.140580  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 18:33:06.167566  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 18:33:06.175336  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1016 18:33:06.256206  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1016 18:33:06.256278  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1016 18:33:06.511012  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1016 18:33:06.511040  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1016 18:33:06.621301  291068 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.183844207s)
	I1016 18:33:06.621331  291068 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1016 18:33:06.622254  291068 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.885356272s)
	I1016 18:33:06.622858  291068 node_ready.go:35] waiting up to 6m0s for node "addons-303264" to be "Ready" ...
	I1016 18:33:06.623024  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.440340782s)
	I1016 18:33:06.807679  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1016 18:33:06.807706  291068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1016 18:33:07.005935  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1016 18:33:07.006007  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1016 18:33:07.128375  291068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-303264" context rescaled to 1 replicas
	I1016 18:33:07.218548  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1016 18:33:07.218613  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1016 18:33:07.398038  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 18:33:07.398106  291068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1016 18:33:07.608248  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1016 18:33:08.627125  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:10.312329  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.03757901s)
	I1016 18:33:10.312367  291068 addons.go:479] Verifying addon ingress=true in "addons-303264"
	I1016 18:33:10.312531  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.982851627s)
	I1016 18:33:10.312675  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.977575665s)
	I1016 18:33:10.312721  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.907890973s)
	I1016 18:33:10.312781  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.875879765s)
	I1016 18:33:10.312844  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.827978646s)
	I1016 18:33:10.312897  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.824977322s)
	I1016 18:33:10.313010  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.755974911s)
	I1016 18:33:10.313090  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.551304871s)
	W1016 18:33:10.313108  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:10.313174  291068 retry.go:31] will retry after 312.12427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:10.313213  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.355311742s)
	I1016 18:33:10.313242  291068 addons.go:479] Verifying addon registry=true in "addons-303264"
	I1016 18:33:10.313347  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.172734351s)
	I1016 18:33:10.313362  291068 addons.go:479] Verifying addon metrics-server=true in "addons-303264"
	I1016 18:33:10.313448  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.145858329s)
	W1016 18:33:10.313461  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1016 18:33:10.313470  291068 retry.go:31] will retry after 261.192275ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1016 18:33:10.313524  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.138117063s)
	I1016 18:33:10.315669  291068 out.go:179] * Verifying ingress addon...
	I1016 18:33:10.320322  291068 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1016 18:33:10.321112  291068 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-303264 service yakd-dashboard -n yakd-dashboard
	
	I1016 18:33:10.321265  291068 out.go:179] * Verifying registry addon...
	I1016 18:33:10.324697  291068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1016 18:33:10.328017  291068 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1016 18:33:10.328034  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:10.333427  291068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 18:33:10.333444  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:10.335137  291068 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1016 18:33:10.575032  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 18:33:10.578634  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.970295555s)
	I1016 18:33:10.578670  291068 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-303264"
	I1016 18:33:10.581764  291068 out.go:179] * Verifying csi-hostpath-driver addon...
	I1016 18:33:10.585510  291068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1016 18:33:10.601063  291068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 18:33:10.601085  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:10.625654  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:10.824274  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:10.827889  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:11.090069  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:11.127111  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:11.324368  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:11.327817  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:11.590741  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:11.824075  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:11.827523  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:11.914839  291068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1016 18:33:11.914927  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:11.933920  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:12.047504  291068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1016 18:33:12.061553  291068 addons.go:238] Setting addon gcp-auth=true in "addons-303264"
	I1016 18:33:12.061648  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:12.062113  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:12.089639  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:12.090028  291068 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1016 18:33:12.090081  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:12.107482  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:12.323722  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:12.327198  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:12.589417  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:12.823414  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:12.827934  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:13.090542  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:13.127224  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:13.324232  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:13.328028  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:13.354321  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.779237805s)
	I1016 18:33:13.354449  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.728770468s)
	I1016 18:33:13.354521  291068 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.264472702s)
	W1016 18:33:13.354666  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:13.354694  291068 retry.go:31] will retry after 253.939228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:13.357533  291068 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1016 18:33:13.360627  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 18:33:13.363576  291068 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1016 18:33:13.363602  291068 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1016 18:33:13.378604  291068 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1016 18:33:13.378627  291068 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1016 18:33:13.391735  291068 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 18:33:13.391805  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1016 18:33:13.407042  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 18:33:13.590379  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:13.609756  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:13.826948  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:13.837826  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:14.059579  291068 addons.go:479] Verifying addon gcp-auth=true in "addons-303264"
	I1016 18:33:14.062498  291068 out.go:179] * Verifying gcp-auth addon...
	I1016 18:33:14.066260  291068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1016 18:33:14.074654  291068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1016 18:33:14.074728  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:14.175110  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:14.324189  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:14.327719  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:14.543256  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:14.543293  291068 retry.go:31] will retry after 535.687382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:14.570043  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:14.589456  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:14.823594  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:14.829065  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:15.072337  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:15.079676  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:15.089948  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:15.323327  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:15.327976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:15.569658  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:15.589335  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:15.626767  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:15.826729  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:15.830464  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:15.887491  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:15.887522  291068 retry.go:31] will retry after 1.254627435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:16.070272  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:16.089399  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:16.324096  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:16.327583  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:16.573282  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:16.589073  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:16.823873  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:16.827361  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:17.072707  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:17.088541  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:17.142530  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:17.324483  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:17.328187  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:17.570190  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:17.589520  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:17.824892  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:17.827128  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:17.946142  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:17.946176  291068 retry.go:31] will retry after 1.306011986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:18.069112  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:18.089001  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:18.125866  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:18.323969  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:18.327212  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:18.569425  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:18.589111  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:18.823617  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:18.828210  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:19.072770  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:19.088659  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:19.253330  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:19.324627  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:19.337857  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:19.570008  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:19.588979  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:19.824556  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:19.827505  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:20.070516  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 18:33:20.086827  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:20.086862  291068 retry.go:31] will retry after 2.363936981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:20.089462  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:20.126307  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:20.324090  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:20.327809  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:20.569811  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:20.588674  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:20.823066  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:20.827358  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:21.070481  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:21.089221  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:21.324110  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:21.327705  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:21.569743  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:21.589850  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:21.824378  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:21.827809  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:22.070998  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:22.088911  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:22.323825  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:22.327985  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:22.451297  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:22.570254  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:22.588917  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:22.625685  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:22.824613  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:22.828411  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:23.072877  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:23.089912  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:23.318537  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:23.318580  291068 retry.go:31] will retry after 2.580885903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:23.323834  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:23.327502  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:23.569202  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:23.588945  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:23.824090  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:23.828257  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:24.071263  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:24.089933  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:24.324378  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:24.328083  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:24.570252  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:24.589216  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:24.625866  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:24.823987  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:24.827394  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:25.070412  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:25.089341  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:25.323886  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:25.328595  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:25.569979  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:25.589209  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:25.824524  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:25.828126  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:25.900499  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:26.070773  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:26.089271  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:26.323023  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:26.327815  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:26.569887  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:26.588988  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:26.709317  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:26.709350  291068 retry.go:31] will retry after 2.380864454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:26.823480  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:26.828419  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:27.070847  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:27.088764  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:27.126591  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:27.323726  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:27.327283  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:27.569561  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:27.588781  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:27.824101  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:27.827746  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:28.070985  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:28.088988  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:28.323469  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:28.327775  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:28.569893  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:28.589233  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:28.825417  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:28.827372  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:29.070030  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:29.089169  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:29.091223  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1016 18:33:29.127024  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:29.325425  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:29.328124  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:29.569870  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:29.589410  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:29.825835  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:29.829000  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:29.925601  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:29.925660  291068 retry.go:31] will retry after 8.291322723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:30.081962  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:30.089749  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:30.324035  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:30.327804  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:30.570037  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:30.589041  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:30.824069  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:30.827448  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:31.072403  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:31.088506  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:31.323900  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:31.327147  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:31.569101  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:31.589270  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:31.625961  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:31.824420  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:31.827705  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:32.072183  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:32.089372  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:32.323444  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:32.328063  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:32.569233  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:32.588585  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:32.823164  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:32.827787  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:33.070103  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:33.089571  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:33.323892  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:33.327309  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:33.569074  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:33.588917  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:33.823887  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:33.828311  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:34.071331  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:34.089443  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:34.127010  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:34.324305  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:34.330587  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:34.569640  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:34.588452  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:34.824336  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:34.827891  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:35.072561  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:35.089379  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:35.323802  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:35.328216  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:35.570055  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:35.588736  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:35.823537  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:35.828143  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:36.071665  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:36.088499  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:36.324030  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:36.327545  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:36.569920  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:36.588789  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:36.626263  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:36.824047  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:36.827365  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:37.070568  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:37.089733  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:37.323840  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:37.327492  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:37.569476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:37.588476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:37.823944  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:37.827325  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:38.071642  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:38.089106  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:38.217334  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:38.323899  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:38.327580  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:38.569792  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:38.589055  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:38.626551  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:38.824284  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:38.827626  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:39.020293  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:39.020328  291068 retry.go:31] will retry after 9.933327258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:39.070674  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:39.089125  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:39.323949  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:39.327303  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:39.570002  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:39.589366  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:39.823643  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:39.828135  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:40.074094  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:40.089233  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:40.323234  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:40.327735  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:40.569727  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:40.588590  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:40.823982  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:40.827442  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:41.069354  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:41.089375  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:41.126164  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:41.324668  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:41.328180  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:41.569423  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:41.590127  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:41.824186  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:41.827593  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:42.069743  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:42.088888  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:42.324143  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:42.327841  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:42.569777  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:42.588772  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:42.823528  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:42.828241  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:43.069511  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:43.088395  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:43.126270  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:43.323393  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:43.327986  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:43.570225  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:43.588992  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:43.823731  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:43.828036  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:44.071786  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:44.089345  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:44.323902  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:44.327673  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:44.570235  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:44.589124  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:44.823931  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:44.827117  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:45.082762  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:45.096257  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:45.127584  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:45.327894  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:45.332161  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:45.570192  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:45.588952  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:45.832328  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:45.843364  291068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 18:33:45.843390  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:46.122153  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:46.123346  291068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 18:33:46.123384  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:46.131973  291068 node_ready.go:49] node "addons-303264" is "Ready"
	I1016 18:33:46.132004  291068 node_ready.go:38] duration metric: took 39.509119258s for node "addons-303264" to be "Ready" ...
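
At this point the node has reported Ready after roughly 39.5s of the polling seen above. The same readiness condition can be checked directly with kubectl; a minimal sketch, assuming a kubeconfig that points at this cluster (the node name is taken from the log):

    kubectl get node addons-303264
    kubectl wait --for=condition=Ready node/addons-303264 --timeout=120s
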
	I1016 18:33:46.132018  291068 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:33:46.132075  291068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:33:46.157075  291068 api_server.go:72] duration metric: took 42.234316237s to wait for apiserver process to appear ...
	I1016 18:33:46.157100  291068 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:33:46.157121  291068 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:33:46.177886  291068 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 18:33:46.182466  291068 api_server.go:141] control plane version: v1.34.1
	I1016 18:33:46.182496  291068 api_server.go:131] duration metric: took 25.388853ms to wait for apiserver health ...
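
The health probe above queries the API server's /healthz endpoint at https://192.168.49.2:8443 and expects the literal response "ok". The same check can be run by hand; kubectl handles the client certificates, while the curl variant skips TLS verification and is only a rough diagnostic (address taken from the log):

    kubectl get --raw /healthz
    curl -k https://192.168.49.2:8443/healthz
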
	I1016 18:33:46.182506  291068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:33:46.215655  291068 system_pods.go:59] 19 kube-system pods found
	I1016 18:33:46.215697  291068 system_pods.go:61] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.215707  291068 system_pods.go:61] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.215716  291068 system_pods.go:61] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.215723  291068 system_pods.go:61] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.215728  291068 system_pods.go:61] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.215733  291068 system_pods.go:61] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.215738  291068 system_pods.go:61] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.215743  291068 system_pods.go:61] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.215748  291068 system_pods.go:61] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending
	I1016 18:33:46.215752  291068 system_pods.go:61] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.215759  291068 system_pods.go:61] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.215765  291068 system_pods.go:61] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.215776  291068 system_pods.go:61] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending
	I1016 18:33:46.215784  291068 system_pods.go:61] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.215792  291068 system_pods.go:61] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.215800  291068 system_pods.go:61] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending
	I1016 18:33:46.215809  291068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.215821  291068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.215826  291068 system_pods.go:61] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending
	I1016 18:33:46.215834  291068 system_pods.go:74] duration metric: took 33.321701ms to wait for pod list to return data ...
	I1016 18:33:46.215846  291068 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:33:46.219628  291068 default_sa.go:45] found service account: "default"
	I1016 18:33:46.219653  291068 default_sa.go:55] duration metric: took 3.800858ms for default service account to be created ...
	I1016 18:33:46.219662  291068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:33:46.236967  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:46.237006  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.237015  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.237024  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.237030  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.237035  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.237040  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.237044  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.237048  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.237053  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending
	I1016 18:33:46.237061  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.237067  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.237079  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.237083  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending
	I1016 18:33:46.237090  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.237099  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.237104  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending
	I1016 18:33:46.237112  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.237123  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.237127  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending
	I1016 18:33:46.237205  291068 retry.go:31] will retry after 264.166941ms: missing components: kube-dns
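
The "missing components: kube-dns" retries resolve once the coredns pod leaves Pending, which happens a few polls later in this log. A minimal sketch for inspecting that directly, assuming the conventional k8s-app=kube-dns label and the standard coredns deployment name (neither is printed here, so adjust if the cluster differs):

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system rollout status deployment/coredns --timeout=120s
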
	I1016 18:33:46.335731  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:46.338492  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:46.517581  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:46.517622  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.517631  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.517641  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.517649  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.517654  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.517660  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.517665  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.517681  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.517693  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 18:33:46.517698  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.517703  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.517709  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.517718  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending
	I1016 18:33:46.517725  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.517730  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.517739  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending
	I1016 18:33:46.517747  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.517754  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.517763  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:33:46.517779  291068 retry.go:31] will retry after 261.532262ms: missing components: kube-dns
	I1016 18:33:46.616822  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:46.618229  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:46.784580  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:46.784617  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.784627  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.784636  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.784651  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.784659  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.784665  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.784670  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.784678  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.784685  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 18:33:46.784692  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.784697  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.784703  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.784709  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 18:33:46.784719  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.784725  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.784731  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 18:33:46.784738  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.784749  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.784755  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:33:46.784772  291068 retry.go:31] will retry after 406.660384ms: missing components: kube-dns
	I1016 18:33:46.823940  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:46.831156  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:47.072370  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:47.089562  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:47.197344  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:47.197386  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Running
	I1016 18:33:47.197397  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:47.197431  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:47.197446  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:47.197452  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:47.197457  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:47.197465  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:47.197470  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:47.197476  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 18:33:47.197511  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:47.197525  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:47.197532  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:47.197539  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 18:33:47.197551  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:47.197557  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:47.197566  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 18:33:47.197591  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:47.197607  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:47.197612  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Running
	I1016 18:33:47.197636  291068 system_pods.go:126] duration metric: took 977.967894ms to wait for k8s-apps to be running ...
	I1016 18:33:47.197646  291068 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:33:47.197720  291068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:33:47.223236  291068 system_svc.go:56] duration metric: took 25.580082ms WaitForService to wait for kubelet
	I1016 18:33:47.223262  291068 kubeadm.go:586] duration metric: took 43.300508563s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
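
The kubelet probe above uses systemctl's --quiet mode, so success is signalled only through the exit status. When debugging interactively, the same check can be run on the node with its state and recent logs visible; a minimal sketch, assuming the minikube profile name matches the node name addons-303264 shown in the log:

    minikube -p addons-303264 ssh -- sudo systemctl is-active kubelet
    minikube -p addons-303264 ssh -- sudo journalctl -u kubelet -n 20 --no-pager
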
	I1016 18:33:47.223300  291068 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:33:47.226425  291068 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:33:47.226459  291068 node_conditions.go:123] node cpu capacity is 2
	I1016 18:33:47.226473  291068 node_conditions.go:105] duration metric: took 3.160904ms to run NodePressure ...
	I1016 18:33:47.226508  291068 start.go:241] waiting for startup goroutines ...
	I1016 18:33:47.323552  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:47.328312  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:47.569884  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:47.589874  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:47.824226  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:47.827829  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:48.073896  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:48.090157  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:48.324266  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:48.327448  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:48.569637  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:48.589608  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:48.824148  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:48.828610  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:48.953903  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:49.070248  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:49.098310  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:49.323878  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:49.327888  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:49.570957  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:49.589909  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:49.823393  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:49.827824  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:50.073810  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:50.090109  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:50.094174  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.140228403s)
	W1016 18:33:50.094259  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:50.094292  291068 retry.go:31] will retry after 10.297449613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:50.323296  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:50.327620  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:50.570208  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:50.590409  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:50.824167  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:50.827669  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:51.070682  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:51.090624  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:51.323886  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:51.327861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:51.570851  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:51.602571  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:51.824639  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:51.830934  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:52.074305  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:52.097871  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:52.329810  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:52.330593  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:52.572861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:52.590790  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:52.824085  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:52.828901  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:53.073588  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:53.091175  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:53.325694  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:53.328976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:53.578647  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:53.589676  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:53.824611  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:53.828644  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:54.070618  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:54.089784  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:54.323935  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:54.327918  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:54.570599  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:54.589696  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:54.824314  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:54.828054  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:55.074305  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:55.090420  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:55.323925  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:55.327976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:55.570083  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:55.589684  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:55.823735  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:55.828625  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:56.073218  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:56.089810  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:56.324031  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:56.328134  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:56.570165  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:56.589791  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:56.824286  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:56.828070  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:57.070739  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:57.089435  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:57.324419  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:57.328454  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:57.569415  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:57.589274  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:57.823746  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:57.827413  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:58.074242  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:58.089804  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:58.325263  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:58.328020  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:58.570480  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:58.589606  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:58.824494  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:58.828220  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:59.071765  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:59.089828  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:59.324135  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:59.328205  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:59.570765  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:59.589620  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:59.824393  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:59.828251  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:00.107140  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:00.109429  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:00.329120  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:00.332455  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:00.392800  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:34:00.570115  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:00.590315  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:00.823374  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:00.828200  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:01.070674  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:01.089277  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:01.324483  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:01.328419  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:01.501551  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.108709299s)
	W1016 18:34:01.501590  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:34:01.501608  291068 retry.go:31] will retry after 16.143036034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:34:01.577914  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:01.610499  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:01.824485  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:01.828459  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:02.070785  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:02.088974  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:02.324370  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:02.328318  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:02.569818  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:02.591097  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:02.823910  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:02.827465  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:03.084966  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:03.101206  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:03.411424  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:03.411679  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:03.569859  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:03.589299  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:03.823943  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:03.827684  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:04.069925  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:04.089240  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:04.323676  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:04.327471  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:04.569686  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:04.589210  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:04.823276  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:04.827773  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:05.071355  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:05.089921  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:05.324465  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:05.330449  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:05.570011  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:05.589316  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:05.827118  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:05.828839  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:06.070583  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:06.088762  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:06.324072  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:06.327629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:06.569850  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:06.588886  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:06.824251  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:06.827827  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:07.077627  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:07.089235  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:07.325360  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:07.332140  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:07.568948  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:07.589411  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:07.823598  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:07.828653  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:08.078024  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:08.088828  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:08.326251  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:08.328819  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:08.570219  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:08.588788  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:08.824152  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:08.828134  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:09.074336  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:09.095304  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:09.323382  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:09.327825  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:09.570141  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:09.589547  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:09.823617  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:09.828176  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:10.070947  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:10.102322  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:10.323872  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:10.327539  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:10.570051  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:10.590933  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:10.824369  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:10.828594  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:11.072744  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:11.090478  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:11.324330  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:11.327720  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:11.570098  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:11.592740  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:11.824524  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:11.828253  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:12.069928  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:12.088900  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:12.324186  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:12.328085  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:12.569621  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:12.590137  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:12.824525  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:12.828290  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:13.070054  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:13.089825  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:13.325403  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:13.327801  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:13.570439  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:13.589008  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:13.824378  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:13.828153  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:14.070937  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:14.103738  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:14.324418  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:14.328023  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:14.574503  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:14.598166  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:14.824911  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:14.827630  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:15.073966  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:15.091730  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:15.325309  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:15.328399  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:15.569771  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:15.588887  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:15.825699  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:15.828114  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:16.070770  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:16.089351  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:16.323669  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:16.327421  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:16.569323  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:16.590092  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:16.824486  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:16.828168  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:17.070686  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:17.088918  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:17.325199  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:17.426125  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:17.569117  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:17.589186  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:17.645564  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:34:17.824402  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:17.828250  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:18.072637  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:18.091773  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:18.324665  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:18.329093  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:18.571249  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:18.588936  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:18.707635  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.06203214s)
	W1016 18:34:18.707669  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:34:18.707690  291068 retry.go:31] will retry after 43.779470207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:34:18.823952  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:18.828426  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:19.071333  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:19.089925  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:19.324115  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:19.327756  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:19.570527  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:19.589175  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:19.824913  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:19.827862  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:20.075019  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:20.090053  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:20.323981  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:20.329565  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:20.569772  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:20.589164  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:20.823770  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:20.827735  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:21.074886  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:21.094149  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:21.323591  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:21.328541  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:21.570353  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:21.591033  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:21.825101  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:21.827598  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:22.070134  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:22.090169  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:22.323881  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:22.327416  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:22.569877  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:22.589728  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:22.823782  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:22.827818  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:23.070972  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:23.089757  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:23.325643  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:23.328152  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:23.570998  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:23.589949  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:23.824274  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:23.828077  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:24.070694  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:24.088806  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:24.324001  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:24.327683  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:24.570550  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:24.589347  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:24.823343  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:24.828100  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:25.069629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:25.088890  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:25.324693  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:25.328654  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:25.575940  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:25.589465  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:25.823672  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:25.827435  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:26.070925  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:26.092476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:26.323729  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:26.331950  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:26.569951  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:26.589553  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:26.823464  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:26.828057  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:27.071299  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:27.090539  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:27.324590  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:27.328372  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:27.570650  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:27.589007  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:27.824193  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:27.827473  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:28.071851  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:28.093912  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:28.325442  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:28.337254  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:28.571296  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:28.591299  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:28.824670  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:28.828477  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:29.069739  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:29.088748  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:29.324617  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:29.328244  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:29.569454  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:29.589083  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:29.824038  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:29.827479  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:30.098703  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:30.100106  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:30.327106  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:30.328594  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:30.569630  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:30.589391  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:30.824481  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:30.828221  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:31.072560  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:31.089016  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:31.330587  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:31.330766  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:31.570317  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:31.589623  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:31.823794  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:31.828262  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:32.069253  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:32.089365  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:32.324406  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:32.327976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:32.570232  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:32.589841  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:32.824846  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:32.827791  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:33.070398  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:33.089584  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:33.324618  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:33.328495  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:33.570426  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:33.590776  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:33.829543  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:33.829675  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:34.069834  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:34.089399  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:34.323384  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:34.328374  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:34.569341  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:34.589558  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:34.823610  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:34.828390  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:35.071476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:35.089015  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:35.323999  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:35.327670  291068 kapi.go:107] duration metric: took 1m25.002972197s to wait for kubernetes.io/minikube-addons=registry ...
	I1016 18:34:35.569847  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:35.588866  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:35.824196  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:36.069983  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:36.089265  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:36.326162  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:36.570629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:36.589221  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:36.824156  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:37.070655  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:37.090465  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:37.323806  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:37.569547  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:37.589692  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:37.828924  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:38.072336  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:38.091062  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:38.324089  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:38.570694  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:38.589024  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:38.823866  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:39.071644  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:39.089786  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:39.325057  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:39.569401  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:39.590608  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:39.823820  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:40.088077  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:40.091533  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:40.325628  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:40.570700  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:40.589542  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:40.823708  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:41.082495  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:41.095970  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:41.327806  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:41.572922  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:41.622486  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:41.824262  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:42.069806  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:42.090127  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:42.327138  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:42.569488  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:42.589300  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:42.824690  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:43.071095  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:43.090071  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:43.327306  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:43.570525  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:43.589486  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:43.824014  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:44.070380  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:44.089411  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:44.323621  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:44.578329  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:44.610442  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:44.824115  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:45.101646  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:45.103027  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:45.328284  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:45.569861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:45.591273  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:45.823411  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:46.069861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:46.089846  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:46.323813  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:46.569912  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:46.589017  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:46.824331  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:47.069759  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:47.088921  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:47.324976  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:47.570244  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:47.590477  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:47.829234  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:48.069994  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:48.089627  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:48.324672  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:48.571237  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:48.590431  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:48.823894  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:49.075079  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:49.092979  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:49.330784  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:49.570696  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:49.589102  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:49.825196  291068 kapi.go:107] duration metric: took 1m39.504874132s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1016 18:34:50.075563  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:50.176322  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:50.570114  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:50.590284  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:51.075987  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:51.089088  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:51.570209  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:51.590033  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:52.071255  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:52.089789  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:52.569850  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:52.589629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:53.070359  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:53.089800  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:53.570404  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:53.589585  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:54.070130  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:54.089011  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:54.569547  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:54.588712  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:55.069670  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:55.088969  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:55.569542  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:55.589695  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:56.070192  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:56.091487  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:56.570090  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:56.589111  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:57.070649  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:57.089097  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:57.570077  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:57.589928  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:58.071026  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:58.089283  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:58.570220  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:58.589567  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:59.069728  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:59.088760  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:59.569814  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:59.588693  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:00.094631  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:00.120907  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:00.570160  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:00.590032  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:01.069921  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:01.089966  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:01.571931  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:01.672797  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:02.072479  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:02.088870  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:02.487368  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:35:02.574151  291068 kapi.go:107] duration metric: took 1m48.507889147s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1016 18:35:02.577174  291068 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-303264 cluster.
	I1016 18:35:02.579994  291068 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1016 18:35:02.582667  291068 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1016 18:35:02.589097  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:03.089189  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:03.590231  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:03.599980  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.112514841s)
	W1016 18:35:03.600070  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1016 18:35:03.600317  291068 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1016 18:35:04.091366  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:04.590169  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:05.089910  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:05.589912  291068 kapi.go:107] duration metric: took 1m55.004401735s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1016 18:35:05.593098  291068 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1016 18:35:05.596160  291068 addons.go:514] duration metric: took 2m1.6730127s for enable addons: enabled=[registry-creds nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1016 18:35:05.596211  291068 start.go:246] waiting for cluster config update ...
	I1016 18:35:05.596233  291068 start.go:255] writing updated cluster config ...
	I1016 18:35:05.596533  291068 ssh_runner.go:195] Run: rm -f paused
	I1016 18:35:05.601027  291068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:35:05.604849  291068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8ztvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.612083  291068 pod_ready.go:94] pod "coredns-66bc5c9577-8ztvw" is "Ready"
	I1016 18:35:05.612112  291068 pod_ready.go:86] duration metric: took 7.234407ms for pod "coredns-66bc5c9577-8ztvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.614755  291068 pod_ready.go:83] waiting for pod "etcd-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.619653  291068 pod_ready.go:94] pod "etcd-addons-303264" is "Ready"
	I1016 18:35:05.619682  291068 pod_ready.go:86] duration metric: took 4.898085ms for pod "etcd-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.621982  291068 pod_ready.go:83] waiting for pod "kube-apiserver-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.626664  291068 pod_ready.go:94] pod "kube-apiserver-addons-303264" is "Ready"
	I1016 18:35:05.626695  291068 pod_ready.go:86] duration metric: took 4.688272ms for pod "kube-apiserver-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.630141  291068 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.005066  291068 pod_ready.go:94] pod "kube-controller-manager-addons-303264" is "Ready"
	I1016 18:35:06.005093  291068 pod_ready.go:86] duration metric: took 374.920822ms for pod "kube-controller-manager-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.207516  291068 pod_ready.go:83] waiting for pod "kube-proxy-vfskf" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.605079  291068 pod_ready.go:94] pod "kube-proxy-vfskf" is "Ready"
	I1016 18:35:06.605106  291068 pod_ready.go:86] duration metric: took 397.561683ms for pod "kube-proxy-vfskf" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.805257  291068 pod_ready.go:83] waiting for pod "kube-scheduler-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:07.209937  291068 pod_ready.go:94] pod "kube-scheduler-addons-303264" is "Ready"
	I1016 18:35:07.209969  291068 pod_ready.go:86] duration metric: took 404.683648ms for pod "kube-scheduler-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:07.209983  291068 pod_ready.go:40] duration metric: took 1.608920977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:35:07.265830  291068 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 18:35:07.269126  291068 out.go:179] * Done! kubectl is now configured to use "addons-303264" cluster and "default" namespace by default
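
The gcp-auth messages at 18:35:02 describe two knobs: a `gcp-auth-skip-secret` pod label that opts a pod out of credential mounting, and re-running `addons enable gcp-auth` with `--refresh` so pods created before the addon was enabled get the mount. A minimal sketch of both, assuming a throwaway pod (the pod name is hypothetical; the busybox image is one already used elsewhere in this run):

    # Pod that will NOT get GCP credentials mounted (label key taken from the log message above;
    # minikube's documentation uses the value "true").
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox
        command: ["sleep", "3600"]
    EOF

    # Remount credentials into pods that already existed when the addon was enabled.
    minikube -p addons-303264 addons enable gcp-auth --refresh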
	
	
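The inspektor-gadget failure above is a client-side validation error: kubectl refuses `/etc/kubernetes/addons/ig-crd.yaml` because the document does not declare `apiVersion` and `kind`, which every Kubernetes manifest must carry. A sketch of how this could be inspected on the node, assuming shell access via `minikube ssh` (whether the file is empty or merely malformed is not visible in the log):

    # On the minikube node: minikube ssh -p addons-303264
    sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml   # a valid object starts with apiVersion: / kind:

    # Re-running the apply with --validate=false (the workaround the error message suggests) only
    # skips client-side schema validation; an object still missing apiVersion/kind is rejected anyway.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml
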
	==> CRI-O <==
	Oct 16 18:37:58 addons-303264 crio[833]: time="2025-10-16T18:37:58.847690627Z" level=info msg="Removed container ac58464518d462977a172dd9e2ebebca93adeb98a1bdedb1e869f3ebb5c6b270: kube-system/registry-creds-764b6fb674-25wdq/registry-creds" id=316b8179-a041-46f3-9259-fbd0b6e8fc91 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.463056503Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-rl4jm/POD" id=f87bc5d2-cacf-4083-8a87-342629da0bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.463138956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.471480992Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rl4jm Namespace:default ID:e9d220601b4844b940e80a2568c2793599323f6aead63e4c54b76f33c0838e4f UID:01db452e-87ad-46ab-ba2d-b4fb69a76940 NetNS:/var/run/netns/476e2ab7-60a7-410c-a27b-eb2245c1e296 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002499088}] Aliases:map[]}"
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.471523602Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-rl4jm to CNI network \"kindnet\" (type=ptp)"
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.495185887Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rl4jm Namespace:default ID:e9d220601b4844b940e80a2568c2793599323f6aead63e4c54b76f33c0838e4f UID:01db452e-87ad-46ab-ba2d-b4fb69a76940 NetNS:/var/run/netns/476e2ab7-60a7-410c-a27b-eb2245c1e296 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002499088}] Aliases:map[]}"
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.495378282Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-rl4jm for CNI network kindnet (type=ptp)"
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.500207112Z" level=info msg="Ran pod sandbox e9d220601b4844b940e80a2568c2793599323f6aead63e4c54b76f33c0838e4f with infra container: default/hello-world-app-5d498dc89-rl4jm/POD" id=f87bc5d2-cacf-4083-8a87-342629da0bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.502899096Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4da269bf-9267-4014-b78d-c4faceb1638e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.503044311Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=4da269bf-9267-4014-b78d-c4faceb1638e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.503091015Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=4da269bf-9267-4014-b78d-c4faceb1638e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.512278995Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b3119500-ac48-46ea-8462-0a2c88a4fd46 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:38:07 addons-303264 crio[833]: time="2025-10-16T18:38:07.514957637Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.150281936Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=b3119500-ac48-46ea-8462-0a2c88a4fd46 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.15123102Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=68ac98f2-2a6e-4b7f-895b-a604501ec884 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.155615549Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d01593b6-18c2-4a9a-8022-c2e3a0deb088 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.164271859Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-rl4jm/hello-world-app" id=c76e4e52-33e3-47b5-aa00-bdf5868a9d24 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.165036375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.193564875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.200536944Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/213cd499c19b449d361930136301786f77e1fe8f760133ed312c6f5eaa6edcf2/merged/etc/passwd: no such file or directory"
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.200707677Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/213cd499c19b449d361930136301786f77e1fe8f760133ed312c6f5eaa6edcf2/merged/etc/group: no such file or directory"
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.216652464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.241437209Z" level=info msg="Created container cb84cb724bcbaf1a3fc34060f8a019d0d2ba702ce6bed2044478a3b72bdd202d: default/hello-world-app-5d498dc89-rl4jm/hello-world-app" id=c76e4e52-33e3-47b5-aa00-bdf5868a9d24 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.245034452Z" level=info msg="Starting container: cb84cb724bcbaf1a3fc34060f8a019d0d2ba702ce6bed2044478a3b72bdd202d" id=1982de89-2305-49cf-a605-39ee702b7f89 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:38:08 addons-303264 crio[833]: time="2025-10-16T18:38:08.250608525Z" level=info msg="Started container" PID=7223 containerID=cb84cb724bcbaf1a3fc34060f8a019d0d2ba702ce6bed2044478a3b72bdd202d description=default/hello-world-app-5d498dc89-rl4jm/hello-world-app id=1982de89-2305-49cf-a605-39ee702b7f89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e9d220601b4844b940e80a2568c2793599323f6aead63e4c54b76f33c0838e4f
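
The CRI-O entries above trace the whole pull-and-run path for `hello-world-app`: ImageStatus reports the image missing, PullImage fetches `docker.io/kicbase/echo-server:1.0`, then CreateContainer and StartContainer bring up container `cb84cb724bcba...` in sandbox `e9d220601b484...`. A sketch of checking the same state from the node with `crictl` (IDs copied from the log; run inside `minikube ssh -p addons-303264`):

    sudo crictl images | grep echo-server        # the image pulled at 18:38:08
    sudo crictl ps --name hello-world-app        # the running container
    sudo crictl inspect cb84cb724bcbaf1a3fc34060f8a019d0d2ba702ce6bed2044478a3b72bdd202d
    sudo crictl logs cb84cb724bcbaf1a3fc34060f8a019d0d2ba702ce6bed2044478a3b72bdd202d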
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	cb84cb724bcba       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   e9d220601b484       hello-world-app-5d498dc89-rl4jm             default
	2286f68a98d1a       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             10 seconds ago           Exited              registry-creds                           1                   83e7920a52955       registry-creds-764b6fb674-25wdq             kube-system
	3f979110f545a       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   17d156f8e7d2f       nginx                                       default
	01b3bc8a867f9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   0635dc8f3b8c2       busybox                                     default
	4c854724ff606       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	72c450061ca94       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	d3c44cd5669c9       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	ed8f5ff4c7d24       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   66f332b550973       gcp-auth-78565c9fb4-7stxd                   gcp-auth
	2fd75860dad3e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	6eb687c5bd9ac       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   8b61339c51fb7       ingress-nginx-controller-675c5ddd98-l5ks7   ingress-nginx
	b85fa5b248e27       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	988ad1327faa5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   da7d713ef51cc       gadget-xkdv7                                gadget
	817135be1fb12       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   0b1a0b740b239       csi-hostpath-attacher-0                     kube-system
	cc0546bd9d12a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   d205eed98147d       registry-proxy-jktvf                        kube-system
	9b0f87f3e3a62       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   e3dfa7a55055c       ingress-nginx-admission-patch-ndrbx         ingress-nginx
	83e350274adee       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   8c7880db89b97       nvidia-device-plugin-daemonset-frsg8        kube-system
	4d4a9d8e61179       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	725ad79381cb1       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   0fbb003d171d5       yakd-dashboard-5ff678cb9-qzhjz              yakd-dashboard
	f4b21b5d4fe92       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   ed6255039ae21       ingress-nginx-admission-create-j7q4k        ingress-nginx
	96ac5bbeec4b1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   3c603989b64c4       local-path-provisioner-648f6765c9-jzvjp     local-path-storage
	54a940e28a474       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   c5664f50e874c       csi-hostpath-resizer-0                      kube-system
	ddb9eebdec6b1       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   c04e6a698eb6b       registry-6b586f9694-tt65k                   kube-system
	42b57482939e2       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   17a03812a9495       kube-ingress-dns-minikube                   kube-system
	563604467d1e7       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   a139d7f50eb69       cloud-spanner-emulator-86bd5cbb97-jl554     default
	a1df688b216b8       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   ec83c190026b4       metrics-server-85b7d694d7-2pqhh             kube-system
	8049d0179c2ce       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   342d7268e20dc       snapshot-controller-7d9fbc56b8-9gxlc        kube-system
	2f9a34f263e49       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   8df027d1f4f07       snapshot-controller-7d9fbc56b8-7ncgr        kube-system
	a11803eed98f1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   979e33544a5ba       storage-provisioner                         kube-system
	2150dbabd80c7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   b34b5acd5c2d8       coredns-66bc5c9577-8ztvw                    kube-system
	a43557a0c4603       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   78e312da20470       kube-proxy-vfskf                            kube-system
	3478855350e27       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   bbdf78d3a843e       kindnet-mbblc                               kube-system
	2f7b424d8bee4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   e2caa69fcfd51       kube-scheduler-addons-303264                kube-system
	060c04d69de0b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   c42615248c708       kube-apiserver-addons-303264                kube-system
	b9c25f79f72e1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   081eb00b5824d       kube-controller-manager-addons-303264       kube-system
	014826c0f016d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   fa4ac1df76f10       etcd-addons-303264                          kube-system
	
	
	==> coredns [2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b] <==
	[INFO] 10.244.0.11:53699 - 22731 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001800115s
	[INFO] 10.244.0.11:53699 - 52122 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000106937s
	[INFO] 10.244.0.11:53699 - 16196 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000070818s
	[INFO] 10.244.0.11:44753 - 22368 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000134383s
	[INFO] 10.244.0.11:44753 - 22155 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161263s
	[INFO] 10.244.0.11:35921 - 9455 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000103606s
	[INFO] 10.244.0.11:35921 - 9718 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125275s
	[INFO] 10.244.0.11:59016 - 43478 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081091s
	[INFO] 10.244.0.11:59016 - 43034 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080106s
	[INFO] 10.244.0.11:54541 - 23156 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001326648s
	[INFO] 10.244.0.11:54541 - 22992 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001358697s
	[INFO] 10.244.0.11:43268 - 38611 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156479s
	[INFO] 10.244.0.11:43268 - 38183 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100939s
	[INFO] 10.244.0.21:58600 - 13277 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000215663s
	[INFO] 10.244.0.21:38237 - 30403 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00011145s
	[INFO] 10.244.0.21:54851 - 62294 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108939s
	[INFO] 10.244.0.21:48090 - 58170 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129837s
	[INFO] 10.244.0.21:41642 - 55017 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095753s
	[INFO] 10.244.0.21:43899 - 53944 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100914s
	[INFO] 10.244.0.21:32818 - 60481 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001498465s
	[INFO] 10.244.0.21:45500 - 554 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002193158s
	[INFO] 10.244.0.21:38362 - 21090 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002937607s
	[INFO] 10.244.0.21:45280 - 42672 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.00259401s
	[INFO] 10.244.0.23:39461 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157128s
	[INFO] 10.244.0.23:41775 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000264902s
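
The coredns queries above show standard search-path expansion: lookups for `registry.kube-system.svc.cluster.local` (fewer than the default `ndots:5` dots) are first tried with each search suffix, answering NXDOMAIN, before the fully-qualified form answers NOERROR. A sketch of reproducing a lookup from inside the cluster (the probe pod name is arbitrary; busybox is the image already used by this run):

    kubectl run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      nslookup registry.kube-system.svc.cluster.local

    # A trailing dot marks the name as fully qualified, which skips the search-list expansion entirely:
    kubectl run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      nslookup registry.kube-system.svc.cluster.local.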
	
	
	==> describe nodes <==
	Name:               addons-303264
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-303264
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=addons-303264
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_32_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-303264
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-303264"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:32:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-303264
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:38:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:38:04 +0000   Thu, 16 Oct 2025 18:32:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:38:04 +0000   Thu, 16 Oct 2025 18:32:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:38:04 +0000   Thu, 16 Oct 2025 18:32:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:38:04 +0000   Thu, 16 Oct 2025 18:33:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-303264
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                07b2a673-6498-471b-80f5-89e4ac06aded
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-86bd5cbb97-jl554      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  default                     hello-world-app-5d498dc89-rl4jm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-xkdv7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  gcp-auth                    gcp-auth-78565c9fb4-7stxd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-l5ks7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m59s
	  kube-system                 coredns-66bc5c9577-8ztvw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 csi-hostpathplugin-5z9bs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 etcd-addons-303264                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m11s
	  kube-system                 kindnet-mbblc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-apiserver-addons-303264                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-addons-303264        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-vfskf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-addons-303264                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 metrics-server-85b7d694d7-2pqhh              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m
	  kube-system                 nvidia-device-plugin-daemonset-frsg8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 registry-6b586f9694-tt65k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 registry-creds-764b6fb674-25wdq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 registry-proxy-jktvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 snapshot-controller-7d9fbc56b8-7ncgr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 snapshot-controller-7d9fbc56b8-9gxlc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  local-path-storage          local-path-provisioner-648f6765c9-jzvjp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qzhjz               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m4s   kube-proxy       
	  Normal   Starting                 5m11s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m11s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m11s  kubelet          Node addons-303264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m11s  kubelet          Node addons-303264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m11s  kubelet          Node addons-303264 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m7s   node-controller  Node addons-303264 event: Registered Node addons-303264 in Controller
	  Normal   NodeReady                4m24s  kubelet          Node addons-303264 status is now: NodeReady
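
The node report above is the standard `kubectl describe node` view; a minimal sketch of regenerating it against this cluster (kubectl is already pointed at addons-303264 per the "Done!" line earlier):

    kubectl describe node addons-303264
    kubectl get node addons-303264 -o wide   # condensed: status, version, internal IP, OS image, runtime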
	
	
	==> dmesg <==
	[Oct16 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015294] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035217] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.777829] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.353148] kauditd_printk_skb: 36 callbacks suppressed
	[Oct16 17:39] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=00000000a1708097{9P.session} n=00000000c48db394
	[  +0.001150] FS-Cache: O-key=[10] '34323935323233313231'
	[  +0.000794] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a1708097{9P.session} n=0000000008f2874d
	[  +0.001104] FS-Cache: N-key=[10] '34323935323233313231'
	[Oct16 17:40] hrtimer: interrupt took 46683506 ns
	[Oct16 18:30] kauditd_printk_skb: 8 callbacks suppressed
	[Oct16 18:32] overlayfs: idmapped layers are currently not supported
	[  +0.067059] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23] <==
	{"level":"warn","ts":"2025-10-16T18:32:54.744477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.768901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.779071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.795833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.819860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.835636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.860082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.877943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.898609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.916947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.934554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.945905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.970804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.987703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.997909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.045851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.068366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.084225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.188126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:10.964842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:10.979245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:32.991801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:33.006276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:33.026805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:33.049454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38996","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ed8f5ff4c7d2466aa759c52b8296a84524de7ba1e817213c099710ad380d71ef] <==
	2025/10/16 18:35:01 GCP Auth Webhook started!
	2025/10/16 18:35:07 Ready to marshal response ...
	2025/10/16 18:35:07 Ready to write response ...
	2025/10/16 18:35:08 Ready to marshal response ...
	2025/10/16 18:35:08 Ready to write response ...
	2025/10/16 18:35:08 Ready to marshal response ...
	2025/10/16 18:35:08 Ready to write response ...
	2025/10/16 18:35:30 Ready to marshal response ...
	2025/10/16 18:35:30 Ready to write response ...
	2025/10/16 18:35:35 Ready to marshal response ...
	2025/10/16 18:35:35 Ready to write response ...
	2025/10/16 18:35:35 Ready to marshal response ...
	2025/10/16 18:35:35 Ready to write response ...
	2025/10/16 18:35:45 Ready to marshal response ...
	2025/10/16 18:35:45 Ready to write response ...
	2025/10/16 18:35:47 Ready to marshal response ...
	2025/10/16 18:35:47 Ready to write response ...
	2025/10/16 18:36:01 Ready to marshal response ...
	2025/10/16 18:36:01 Ready to write response ...
	2025/10/16 18:36:20 Ready to marshal response ...
	2025/10/16 18:36:20 Ready to write response ...
	2025/10/16 18:38:07 Ready to marshal response ...
	2025/10/16 18:38:07 Ready to write response ...
	
	
	==> kernel <==
	 18:38:09 up  1:20,  0 user,  load average: 0.43, 1.65, 2.65
	Linux addons-303264 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad] <==
	I1016 18:36:05.136811       1 main.go:301] handling current node
	I1016 18:36:15.136948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:36:15.137060       1 main.go:301] handling current node
	I1016 18:36:25.137438       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:36:25.137582       1 main.go:301] handling current node
	I1016 18:36:35.137433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:36:35.137468       1 main.go:301] handling current node
	I1016 18:36:45.137443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:36:45.137484       1 main.go:301] handling current node
	I1016 18:36:55.137463       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:36:55.137499       1 main.go:301] handling current node
	I1016 18:37:05.136682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:37:05.136712       1 main.go:301] handling current node
	I1016 18:37:15.136555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:37:15.136609       1 main.go:301] handling current node
	I1016 18:37:25.137457       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:37:25.137498       1 main.go:301] handling current node
	I1016 18:37:35.137345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:37:35.137402       1 main.go:301] handling current node
	I1016 18:37:45.136616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:37:45.136738       1 main.go:301] handling current node
	I1016 18:37:55.141261       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:37:55.141300       1 main.go:301] handling current node
	I1016 18:38:05.137238       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:38:05.137277       1 main.go:301] handling current node
	
	
	==> kube-apiserver [060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872] <==
	W1016 18:33:33.005067       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:33.026234       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:33.048819       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:45.668065       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.7.145:443: connect: connection refused
	E1016 18:33:45.668200       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.7.145:443: connect: connection refused" logger="UnhandledError"
	W1016 18:33:45.668873       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.7.145:443: connect: connection refused
	E1016 18:33:45.669007       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.7.145:443: connect: connection refused" logger="UnhandledError"
	W1016 18:33:45.763989       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.7.145:443: connect: connection refused
	E1016 18:33:45.765171       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.7.145:443: connect: connection refused" logger="UnhandledError"
	E1016 18:34:03.234820       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.211.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.211.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.211.125:443: connect: connection refused" logger="UnhandledError"
	W1016 18:34:03.235032       1 handler_proxy.go:99] no RequestInfo found in the context
	E1016 18:34:03.235088       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1016 18:34:03.351553       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1016 18:34:03.407243       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1016 18:35:18.496793       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60958: use of closed network connection
	E1016 18:35:18.729646       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60990: use of closed network connection
	E1016 18:35:18.855817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32786: use of closed network connection
	I1016 18:35:47.184583       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1016 18:35:47.488727       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.156.173"}
	I1016 18:36:13.232757       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1016 18:36:28.907535       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1016 18:38:07.364285       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.24.26"}
	
	
	==> kube-controller-manager [b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c] <==
	I1016 18:33:02.985767       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:33:02.990000       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:33:02.990381       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:33:03.007550       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:33:03.016543       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:33:03.016668       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:33:03.016729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:33:03.016924       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:33:03.017614       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 18:33:03.019248       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:33:03.019270       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:33:03.023556       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:33:03.028420       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1016 18:33:09.336208       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1016 18:33:32.979898       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1016 18:33:32.984046       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1016 18:33:33.034254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1016 18:33:33.034388       1 shared_informer.go:682] "Warning: resync period is smaller than resync check period and the informer has already started. Changing it to the resync check period" resyncPeriod="19h10m34.188859875s" resyncCheckPeriod="19h55m27.132189845s"
	I1016 18:33:33.034423       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1016 18:33:33.034475       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1016 18:33:33.034502       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:33:33.085280       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:33:47.964592       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1016 18:34:03.039632       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1016 18:34:03.113755       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55] <==
	I1016 18:33:05.094209       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:33:05.195664       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:33:05.296281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:33:05.296321       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 18:33:05.296387       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:33:05.337517       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:33:05.337570       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:33:05.356618       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:33:05.356949       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:33:05.356966       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:33:05.362643       1 config.go:200] "Starting service config controller"
	I1016 18:33:05.362663       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:33:05.362680       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:33:05.362685       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:33:05.362695       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:33:05.362703       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:33:05.363337       1 config.go:309] "Starting node config controller"
	I1016 18:33:05.363345       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:33:05.363351       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:33:05.463749       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:33:05.463783       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:33:05.463818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67] <==
	I1016 18:32:57.061981       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:32:57.066505       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:32:57.066631       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:32:57.066654       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:32:57.066671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 18:32:57.074960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:32:57.077756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:32:57.077841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:32:57.077895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:32:57.077957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:32:57.078010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:32:57.078073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:32:57.081372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 18:32:57.081689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:32:57.081748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:32:57.081909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:32:57.081964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:32:57.082007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:32:57.082055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:32:57.082102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:32:57.082144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:32:57.082201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:32:57.082294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:32:57.083381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1016 18:32:58.067533       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:36:28 addons-303264 kubelet[1275]: I1016 18:36:28.913848    1275 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8d7901dd-5507-4748-b3fd-c2020dcf17c1-gcp-creds\") on node \"addons-303264\" DevicePath \"\""
	Oct 16 18:36:28 addons-303264 kubelet[1275]: I1016 18:36:28.913862    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6gwq\" (UniqueName: \"kubernetes.io/projected/8d7901dd-5507-4748-b3fd-c2020dcf17c1-kube-api-access-x6gwq\") on node \"addons-303264\" DevicePath \"\""
	Oct 16 18:36:28 addons-303264 kubelet[1275]: I1016 18:36:28.935037    1275 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-d061406b-5417-4f33-b946-84a28629dc4f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^0233db46-aabf-11f0-918a-72a497fca669") on node "addons-303264"
	Oct 16 18:36:29 addons-303264 kubelet[1275]: I1016 18:36:29.015258    1275 reconciler_common.go:299] "Volume detached for volume \"pvc-d061406b-5417-4f33-b946-84a28629dc4f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0233db46-aabf-11f0-918a-72a497fca669\") on node \"addons-303264\" DevicePath \"\""
	Oct 16 18:36:30 addons-303264 kubelet[1275]: I1016 18:36:30.686856    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7901dd-5507-4748-b3fd-c2020dcf17c1" path="/var/lib/kubelet/pods/8d7901dd-5507-4748-b3fd-c2020dcf17c1/volumes"
	Oct 16 18:36:49 addons-303264 kubelet[1275]: I1016 18:36:49.684287    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jktvf" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:36:50 addons-303264 kubelet[1275]: I1016 18:36:50.683973    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-tt65k" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:36:58 addons-303264 kubelet[1275]: E1016 18:36:58.824663    1275 manager.go:1116] Failed to create existing container: /docker/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/crio-08b623891f6fa636386c0e31cc3779973a6aade3104956a1eb3d8b7cbe0977db: Error finding container 08b623891f6fa636386c0e31cc3779973a6aade3104956a1eb3d8b7cbe0977db: Status 404 returned error can't find the container with id 08b623891f6fa636386c0e31cc3779973a6aade3104956a1eb3d8b7cbe0977db
	Oct 16 18:36:58 addons-303264 kubelet[1275]: E1016 18:36:58.827369    1275 manager.go:1116] Failed to create existing container: /crio-08b623891f6fa636386c0e31cc3779973a6aade3104956a1eb3d8b7cbe0977db: Error finding container 08b623891f6fa636386c0e31cc3779973a6aade3104956a1eb3d8b7cbe0977db: Status 404 returned error can't find the container with id 08b623891f6fa636386c0e31cc3779973a6aade3104956a1eb3d8b7cbe0977db
	Oct 16 18:37:03 addons-303264 kubelet[1275]: I1016 18:37:03.683866    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-frsg8" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:37:54 addons-303264 kubelet[1275]: I1016 18:37:54.683726    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jktvf" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:37:55 addons-303264 kubelet[1275]: I1016 18:37:55.885511    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-25wdq" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:37:58 addons-303264 kubelet[1275]: I1016 18:37:58.214144    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-25wdq" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:37:58 addons-303264 kubelet[1275]: I1016 18:37:58.214204    1275 scope.go:117] "RemoveContainer" containerID="ac58464518d462977a172dd9e2ebebca93adeb98a1bdedb1e869f3ebb5c6b270"
	Oct 16 18:37:58 addons-303264 kubelet[1275]: E1016 18:37:58.824590    1275 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2ac161abc42ff409fd1d3d41c375158c576d965644006609d5e34d793361383f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2ac161abc42ff409fd1d3d41c375158c576d965644006609d5e34d793361383f/diff: no such file or directory, extraDiskErr: <nil>
	Oct 16 18:37:58 addons-303264 kubelet[1275]: I1016 18:37:58.834622    1275 scope.go:117] "RemoveContainer" containerID="ac58464518d462977a172dd9e2ebebca93adeb98a1bdedb1e869f3ebb5c6b270"
	Oct 16 18:37:59 addons-303264 kubelet[1275]: I1016 18:37:59.219160    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-25wdq" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:37:59 addons-303264 kubelet[1275]: I1016 18:37:59.219216    1275 scope.go:117] "RemoveContainer" containerID="2286f68a98d1a5284ca4c00fdcf68f8cca380cc262212d033a852ffc64571668"
	Oct 16 18:37:59 addons-303264 kubelet[1275]: E1016 18:37:59.219360    1275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-25wdq_kube-system(2264cbde-5cda-424e-8a82-3fc4b7eeafe2)\"" pod="kube-system/registry-creds-764b6fb674-25wdq" podUID="2264cbde-5cda-424e-8a82-3fc4b7eeafe2"
	Oct 16 18:38:00 addons-303264 kubelet[1275]: I1016 18:38:00.235727    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-25wdq" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:38:00 addons-303264 kubelet[1275]: I1016 18:38:00.236288    1275 scope.go:117] "RemoveContainer" containerID="2286f68a98d1a5284ca4c00fdcf68f8cca380cc262212d033a852ffc64571668"
	Oct 16 18:38:00 addons-303264 kubelet[1275]: E1016 18:38:00.236607    1275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-25wdq_kube-system(2264cbde-5cda-424e-8a82-3fc4b7eeafe2)\"" pod="kube-system/registry-creds-764b6fb674-25wdq" podUID="2264cbde-5cda-424e-8a82-3fc4b7eeafe2"
	Oct 16 18:38:07 addons-303264 kubelet[1275]: I1016 18:38:07.286923    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/01db452e-87ad-46ab-ba2d-b4fb69a76940-gcp-creds\") pod \"hello-world-app-5d498dc89-rl4jm\" (UID: \"01db452e-87ad-46ab-ba2d-b4fb69a76940\") " pod="default/hello-world-app-5d498dc89-rl4jm"
	Oct 16 18:38:07 addons-303264 kubelet[1275]: I1016 18:38:07.287687    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpxz\" (UniqueName: \"kubernetes.io/projected/01db452e-87ad-46ab-ba2d-b4fb69a76940-kube-api-access-pmpxz\") pod \"hello-world-app-5d498dc89-rl4jm\" (UID: \"01db452e-87ad-46ab-ba2d-b4fb69a76940\") " pod="default/hello-world-app-5d498dc89-rl4jm"
	Oct 16 18:38:07 addons-303264 kubelet[1275]: W1016 18:38:07.497973    1275 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/crio-e9d220601b4844b940e80a2568c2793599323f6aead63e4c54b76f33c0838e4f WatchSource:0}: Error finding container e9d220601b4844b940e80a2568c2793599323f6aead63e4c54b76f33c0838e4f: Status 404 returned error can't find the container with id e9d220601b4844b940e80a2568c2793599323f6aead63e4c54b76f33c0838e4f
	
	
	==> storage-provisioner [a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34] <==
	W1016 18:37:44.178445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:46.181707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:46.186011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:48.189280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:48.193383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:50.197220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:50.210255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:52.213310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:52.217796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:54.220672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:54.224808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:56.227638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:56.235736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:58.247322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:37:58.258543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:00.274740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:00.335519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:02.338736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:02.344134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:04.347412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:04.352711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:06.355652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:06.360440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:08.364758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:38:08.370078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-303264 -n addons-303264
helpers_test.go:269: (dbg) Run:  kubectl --context addons-303264 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-303264 describe pod ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-303264 describe pod ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx: exit status 1 (108.824957ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-j7q4k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ndrbx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-303264 describe pod ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx: exit status 1
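The post-mortem helper above first lists every pod whose phase is not Running, then tries to describe the two admission pods it found; the describe step exits 1 with NotFound, which here most likely just means the completed admission-job pods were removed between the two calls (or are being looked up in the wrong namespace). A minimal sketch of the same check run by hand, assuming the addons-303264 context from this report, a hypothetical <pod-name> placeholder, and the ingress-nginx namespace inferred from the pod names:
	# Same selector the helper uses to find non-running pods
	kubectl --context addons-303264 get po -A -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running
	# Describe a reported pod; tolerate NotFound if it has already been cleaned up
	kubectl --context addons-303264 -n ingress-nginx describe pod <pod-name> || true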
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (284.164872ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:38:10.722231  300732 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:38:10.723925  300732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:38:10.723974  300732 out.go:374] Setting ErrFile to fd 2...
	I1016 18:38:10.723997  300732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:38:10.724316  300732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:38:10.724714  300732 mustload.go:65] Loading cluster: addons-303264
	I1016 18:38:10.725322  300732 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:38:10.725391  300732 addons.go:606] checking whether the cluster is paused
	I1016 18:38:10.725530  300732 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:38:10.725562  300732 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:38:10.726104  300732 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:38:10.744593  300732 ssh_runner.go:195] Run: systemctl --version
	I1016 18:38:10.744656  300732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:38:10.765672  300732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:38:10.867758  300732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:38:10.867875  300732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:38:10.902747  300732 cri.go:89] found id: "2286f68a98d1a5284ca4c00fdcf68f8cca380cc262212d033a852ffc64571668"
	I1016 18:38:10.902770  300732 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:38:10.902775  300732 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:38:10.902779  300732 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:38:10.902787  300732 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:38:10.902792  300732 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:38:10.902795  300732 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:38:10.902798  300732 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:38:10.902801  300732 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:38:10.902807  300732 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:38:10.902810  300732 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:38:10.902814  300732 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:38:10.902818  300732 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:38:10.902821  300732 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:38:10.902824  300732 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:38:10.902829  300732 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:38:10.902838  300732 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:38:10.902842  300732 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:38:10.902845  300732 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:38:10.902848  300732 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:38:10.902852  300732 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:38:10.902855  300732 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:38:10.902858  300732 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:38:10.902861  300732 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:38:10.902864  300732 cri.go:89] found id: ""
	I1016 18:38:10.902918  300732 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:38:10.925955  300732 out.go:203] 
	W1016 18:38:10.928829  300732 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:38:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:38:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:38:10.928859  300732 out.go:285] * 
	* 
	W1016 18:38:10.936556  300732 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:38:10.940165  300732 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
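The stderr trace above shows the sequence `minikube addons disable` runs before touching the addon: it inspects the Docker container's state, lists kube-system containers through crictl over SSH, and finally calls `sudo runc list -f json` to decide whether the cluster is paused; on this node that last command fails with "open /run/runc: no such file or directory", which is what surfaces as MK_ADDON_DISABLE_PAUSED. A rough sketch for reproducing the same probe by hand (commands copied from the log above; treating the runc step as the trigger is an inference, not something the report states):
	# Host-side state check minikube performs first
	docker container inspect addons-303264 --format '{{.State.Status}}'
	# In-guest checks, over the same SSH path minikube uses
	minikube -p addons-303264 ssh -- \
	  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The step that fails on this cri-o node
	minikube -p addons-303264 ssh -- "sudo runc list -f json"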
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable ingress --alsologtostderr -v=1: exit status 11 (286.628334ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:38:11.005284  300775 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:38:11.006242  300775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:38:11.006256  300775 out.go:374] Setting ErrFile to fd 2...
	I1016 18:38:11.006263  300775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:38:11.006521  300775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:38:11.007668  300775 mustload.go:65] Loading cluster: addons-303264
	I1016 18:38:11.008064  300775 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:38:11.008079  300775 addons.go:606] checking whether the cluster is paused
	I1016 18:38:11.008180  300775 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:38:11.008197  300775 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:38:11.008679  300775 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:38:11.028465  300775 ssh_runner.go:195] Run: systemctl --version
	I1016 18:38:11.028535  300775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:38:11.047448  300775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:38:11.160379  300775 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:38:11.160472  300775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:38:11.194719  300775 cri.go:89] found id: "2286f68a98d1a5284ca4c00fdcf68f8cca380cc262212d033a852ffc64571668"
	I1016 18:38:11.194743  300775 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:38:11.194749  300775 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:38:11.194753  300775 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:38:11.194757  300775 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:38:11.194761  300775 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:38:11.194764  300775 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:38:11.194767  300775 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:38:11.194770  300775 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:38:11.194777  300775 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:38:11.194780  300775 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:38:11.194785  300775 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:38:11.194788  300775 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:38:11.194791  300775 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:38:11.194795  300775 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:38:11.194805  300775 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:38:11.194813  300775 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:38:11.194820  300775 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:38:11.194824  300775 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:38:11.194828  300775 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:38:11.194831  300775 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:38:11.194834  300775 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:38:11.194837  300775 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:38:11.194840  300775 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:38:11.194843  300775 cri.go:89] found id: ""
	I1016 18:38:11.194896  300775 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:38:11.217181  300775 out.go:203] 
	W1016 18:38:11.220026  300775 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:38:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:38:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:38:11.220052  300775 out.go:285] * 
	* 
	W1016 18:38:11.226524  300775 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:38:11.229539  300775 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.35s)
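Every addon enable/disable failure in this run follows the same pattern: before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; on this crio node the runc state directory /run/runc does not exist, so that command exits with status 1 and the CLI aborts with MK_ADDON_DISABLE_PAUSED before the paused check can finish. A minimal reproduction sketch, assuming the addons-303264 profile from this run is still up:

    # Same checks minikube performs before disabling an addon, run by hand over SSH:
    out/minikube-linux-arm64 -p addons-303264 ssh -- \
      sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    out/minikube-linux-arm64 -p addons-303264 ssh -- sudo runc list -f json
    # expected on this node: "open /run/runc: no such file or directory", exit status 1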

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xkdv7" [b11975b4-ce39-40be-832f-04e3bb9e747b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006444463s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (276.713789ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:36:36.004994  299580 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:36:36.005840  299580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:36.005887  299580 out.go:374] Setting ErrFile to fd 2...
	I1016 18:36:36.005907  299580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:36.006238  299580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:36:36.006550  299580 mustload.go:65] Loading cluster: addons-303264
	I1016 18:36:36.006949  299580 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:36.006993  299580 addons.go:606] checking whether the cluster is paused
	I1016 18:36:36.007122  299580 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:36.007165  299580 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:36:36.007646  299580 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:36:36.038103  299580 ssh_runner.go:195] Run: systemctl --version
	I1016 18:36:36.038176  299580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:36:36.056080  299580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:36:36.159529  299580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:36:36.159614  299580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:36:36.192113  299580 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:36:36.192186  299580 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:36:36.192207  299580 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:36:36.192236  299580 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:36:36.192270  299580 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:36:36.192295  299580 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:36:36.192315  299580 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:36:36.192334  299580 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:36:36.192370  299580 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:36:36.192390  299580 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:36:36.192409  299580 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:36:36.192438  299580 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:36:36.192460  299580 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:36:36.192479  299580 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:36:36.192498  299580 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:36:36.192528  299580 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:36:36.192560  299580 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:36:36.192580  299580 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:36:36.192614  299580 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:36:36.192637  299580 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:36:36.192661  299580 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:36:36.192694  299580 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:36:36.192715  299580 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:36:36.192732  299580 cri.go:89] found id: ""
	I1016 18:36:36.192815  299580 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:36:36.214472  299580 out.go:203] 
	W1016 18:36:36.217432  299580 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:36:36.217456  299580 out.go:285] * 
	* 
	W1016 18:36:36.223827  299580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:36:36.226955  299580 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)
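The gadget pod itself becomes healthy within about 6 seconds; only the subsequent disable call trips over the runc check described above. A kubectl-only equivalent of the readiness wait the test performs (label and namespace taken from the log; `kubectl wait` is an assumed stand-in for the Go helper, not what the test literally runs):

    kubectl --context addons-303264 wait pod -l k8s-app=gadget -n gadget \
      --for=condition=Ready --timeout=8m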

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.229411ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004346291s
addons_test.go:463: (dbg) Run:  kubectl --context addons-303264 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (276.039637ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:35:46.658375  298451 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:46.659260  298451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:46.659282  298451 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:46.659288  298451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:46.659558  298451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:46.659887  298451 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:46.660269  298451 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:46.660289  298451 addons.go:606] checking whether the cluster is paused
	I1016 18:35:46.660396  298451 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:46.660417  298451 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:46.660879  298451 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:46.679324  298451 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:46.679398  298451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:46.713408  298451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:46.815699  298451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:46.815790  298451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:46.846679  298451 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:46.846704  298451 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:46.846710  298451 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:46.846714  298451 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:46.846718  298451 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:46.846722  298451 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:46.846725  298451 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:46.846727  298451 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:46.846730  298451 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:46.846754  298451 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:46.846773  298451 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:46.846776  298451 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:46.846779  298451 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:46.846782  298451 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:46.846785  298451 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:46.846796  298451 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:46.846803  298451 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:46.846808  298451 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:46.846811  298451 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:46.846815  298451 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:46.846835  298451 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:46.846841  298451 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:46.846845  298451 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:46.846848  298451 cri.go:89] found id: ""
	I1016 18:35:46.846912  298451 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:46.861667  298451 out.go:203] 
	W1016 18:35:46.864612  298451 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:46.864641  298451 out.go:285] * 
	* 
	W1016 18:35:46.871401  298451 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:46.874447  298451 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)
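As with the other addons, metrics-server itself is working here (the pod is Ready and `kubectl top pods` returns data); only the disable step fails. A quick manual verification sketch using the same context and namespace as the log:

    kubectl --context addons-303264 get pods -n kube-system -l k8s-app=metrics-server
    kubectl --context addons-303264 top pods -n kube-system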

                                                
                                    
x
+
TestAddons/parallel/CSI (44.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1016 18:35:45.671138  290312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1016 18:35:45.675374  290312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1016 18:35:45.675401  290312 kapi.go:107] duration metric: took 4.284139ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.295494ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-303264 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-303264 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a669bfa4-4c67-4a84-bc0a-1e2f3a1f7310] Pending
helpers_test.go:352: "task-pv-pod" [a669bfa4-4c67-4a84-bc0a-1e2f3a1f7310] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a669bfa4-4c67-4a84-bc0a-1e2f3a1f7310] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003308359s
addons_test.go:572: (dbg) Run:  kubectl --context addons-303264 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-303264 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-303264 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-303264 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-303264 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-303264 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-303264 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8d7901dd-5507-4748-b3fd-c2020dcf17c1] Pending
helpers_test.go:352: "task-pv-pod-restore" [8d7901dd-5507-4748-b3fd-c2020dcf17c1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8d7901dd-5507-4748-b3fd-c2020dcf17c1] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003548649s
addons_test.go:614: (dbg) Run:  kubectl --context addons-303264 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-303264 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-303264 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (323.717554ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:36:29.411206  299475 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:36:29.412572  299475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:29.412599  299475 out.go:374] Setting ErrFile to fd 2...
	I1016 18:36:29.412606  299475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:29.412972  299475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:36:29.416562  299475 mustload.go:65] Loading cluster: addons-303264
	I1016 18:36:29.417124  299475 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:29.417189  299475 addons.go:606] checking whether the cluster is paused
	I1016 18:36:29.417351  299475 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:29.417415  299475 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:36:29.417969  299475 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:36:29.445357  299475 ssh_runner.go:195] Run: systemctl --version
	I1016 18:36:29.445472  299475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:36:29.472417  299475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:36:29.583988  299475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:36:29.584081  299475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:36:29.633207  299475 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:36:29.633228  299475 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:36:29.633233  299475 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:36:29.633251  299475 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:36:29.633256  299475 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:36:29.633260  299475 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:36:29.633263  299475 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:36:29.633267  299475 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:36:29.633271  299475 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:36:29.633283  299475 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:36:29.633295  299475 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:36:29.633299  299475 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:36:29.633302  299475 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:36:29.633305  299475 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:36:29.633308  299475 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:36:29.633314  299475 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:36:29.633328  299475 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:36:29.633334  299475 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:36:29.633337  299475 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:36:29.633340  299475 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:36:29.633345  299475 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:36:29.633354  299475 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:36:29.633358  299475 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:36:29.633360  299475 cri.go:89] found id: ""
	I1016 18:36:29.633423  299475 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:36:29.652888  299475 out.go:203] 
	W1016 18:36:29.658504  299475 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:36:29.658545  299475 out.go:285] * 
	* 
	W1016 18:36:29.665185  299475 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:36:29.669233  299475 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (273.742368ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:36:29.732453  299520 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:36:29.733328  299520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:29.733373  299520 out.go:374] Setting ErrFile to fd 2...
	I1016 18:36:29.733395  299520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:36:29.733684  299520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:36:29.734024  299520 mustload.go:65] Loading cluster: addons-303264
	I1016 18:36:29.734424  299520 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:29.734470  299520 addons.go:606] checking whether the cluster is paused
	I1016 18:36:29.734600  299520 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:36:29.734645  299520 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:36:29.735166  299520 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:36:29.753995  299520 ssh_runner.go:195] Run: systemctl --version
	I1016 18:36:29.754087  299520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:36:29.775212  299520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:36:29.885104  299520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:36:29.885214  299520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:36:29.913758  299520 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:36:29.913788  299520 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:36:29.913793  299520 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:36:29.913797  299520 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:36:29.913800  299520 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:36:29.913805  299520 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:36:29.913808  299520 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:36:29.913811  299520 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:36:29.913814  299520 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:36:29.913823  299520 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:36:29.913826  299520 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:36:29.913830  299520 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:36:29.913833  299520 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:36:29.913836  299520 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:36:29.913839  299520 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:36:29.913848  299520 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:36:29.913855  299520 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:36:29.913859  299520 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:36:29.913863  299520 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:36:29.913866  299520 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:36:29.913870  299520 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:36:29.913876  299520 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:36:29.913879  299520 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:36:29.913882  299520 cri.go:89] found id: ""
	I1016 18:36:29.913941  299520 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:36:29.929849  299520 out.go:203] 
	W1016 18:36:29.933165  299520 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:36:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:36:29.933187  299520 out.go:285] * 
	* 
	W1016 18:36:29.939505  299520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:36:29.942706  299520 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.28s)
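The full CSI round trip (PVC, pod, snapshot, restored PVC, restored pod) completes; the long runs of `get pvc ... -o jsonpath={.status.phase}` above are the test helper polling until the claim reports Bound. A rough shell equivalent of that polling, with the PVC name and jsonpath taken from the log and the loop itself purely illustrative:

    # Poll the claim's phase until it is Bound, as helpers_test.go does internally:
    until [ "$(kubectl --context addons-303264 get pvc hpvc -n default \
        -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done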

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-303264 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-303264 --alsologtostderr -v=1: exit status 11 (308.195387ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:35:19.198631  297259 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:19.199402  297259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:19.199413  297259 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:19.199419  297259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:19.199690  297259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:19.200023  297259 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:19.200450  297259 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:19.200477  297259 addons.go:606] checking whether the cluster is paused
	I1016 18:35:19.200603  297259 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:19.200634  297259 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:19.201268  297259 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:19.224498  297259 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:19.224560  297259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:19.246376  297259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:19.364440  297259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:19.364539  297259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:19.416188  297259 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:19.416216  297259 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:19.416222  297259 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:19.416225  297259 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:19.416228  297259 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:19.416232  297259 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:19.416236  297259 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:19.416239  297259 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:19.416243  297259 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:19.416249  297259 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:19.416252  297259 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:19.416255  297259 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:19.416258  297259 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:19.416261  297259 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:19.416264  297259 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:19.416269  297259 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:19.416277  297259 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:19.416286  297259 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:19.416289  297259 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:19.416292  297259 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:19.416296  297259 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:19.416299  297259 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:19.416303  297259 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:19.416306  297259 cri.go:89] found id: ""
	I1016 18:35:19.416355  297259 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:19.431681  297259 out.go:203] 
	W1016 18:35:19.434544  297259 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:19.434569  297259 out.go:285] * 
	* 
	W1016 18:35:19.440863  297259 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:19.446696  297259 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-303264 --alsologtostderr -v=1": exit status 11
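The post-mortem that follows dumps `docker inspect` for the node container; the NetworkSettings.Ports map near the end is where the SSH host port used throughout these logs (22/tcp mapped to 33138) comes from. To pull just that value, the same Go template the minikube logs show can be run directly (sketch):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-303264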
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-303264
helpers_test.go:243: (dbg) docker inspect addons-303264:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2",
	        "Created": "2025-10-16T18:32:33.499079971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291461,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:32:33.562059524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/hosts",
	        "LogPath": "/var/lib/docker/containers/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2/039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2-json.log",
	        "Name": "/addons-303264",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-303264:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-303264",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "039913fab7ea195304d0f4d96a7903eec2564008b2f73d8d1f43f3b9fb98e1c2",
	                "LowerDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22ef939eac9adf032f7853ad51904cd074603f8031166df8aba3d379e341185a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-303264",
	                "Source": "/var/lib/docker/volumes/addons-303264/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-303264",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-303264",
	                "name.minikube.sigs.k8s.io": "addons-303264",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c54e14845e91c72ca57667c43e9fd0d59019c21020f58b094484a2f938f1b6c",
	            "SandboxKey": "/var/run/docker/netns/7c54e14845e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-303264": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:70:8d:0d:7f:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04734e091327ec9aae219b7bde6e1d789d28fdf9ff7c1da6401fcd4384794ccf",
	                    "EndpointID": "d5f74218c0ad7ce46275d3ebcc63a8482848f89ab47a52b221be6c8aa3b4559d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-303264",
	                        "039913fab7ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
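For context on the inspect output above: the "Ports" map under NetworkSettings records which ephemeral host ports Docker bound to the container's published ports (e.g. 22/tcp -> 127.0.0.1:33138), and this is the same mapping the start log further down resolves before opening an SSH session to the node. A minimal way to repeat that lookup by hand, assuming the addons-303264 container is still present, is the Go-template query that minikube itself runs:

	# Print the host port bound to the container's SSH port (22/tcp);
	# for this run it should print 33138, matching the JSON above.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-303264

The empty "HostPort" values under HostConfig.PortBindings simply reflect that the ports were published without a fixed host port (--publish=127.0.0.1::22 and similar flags, visible in the start log below), so Docker picked free ports at container start; the assigned values appear only under NetworkSettings.Ports.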
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-303264 -n addons-303264
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-303264 logs -n 25: (1.488884098s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-933367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-933367   │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ delete  │ -p download-only-933367                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-933367   │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ start   │ -o=json --download-only -p download-only-932213 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-932213   │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ delete  │ -p download-only-932213                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-932213   │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ delete  │ -p download-only-933367                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-933367   │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ delete  │ -p download-only-932213                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-932213   │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ start   │ --download-only -p download-docker-790969 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-790969 │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │                     │
	│ delete  │ -p download-docker-790969                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-790969 │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ start   │ --download-only -p binary-mirror-086561 --alsologtostderr --binary-mirror http://127.0.0.1:41065 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-086561   │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │                     │
	│ delete  │ -p binary-mirror-086561                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-086561   │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:32 UTC │
	│ addons  │ disable dashboard -p addons-303264                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │                     │
	│ addons  │ enable dashboard -p addons-303264                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │                     │
	│ start   │ -p addons-303264 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:32 UTC │ 16 Oct 25 18:35 UTC │
	│ addons  │ addons-303264 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ addons-303264 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	│ addons  │ enable headlamp -p addons-303264 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-303264          │ jenkins │ v1.37.0 │ 16 Oct 25 18:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:32:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:32:07.582538  291068 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:32:07.582653  291068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:32:07.582664  291068 out.go:374] Setting ErrFile to fd 2...
	I1016 18:32:07.582670  291068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:32:07.582909  291068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:32:07.583332  291068 out.go:368] Setting JSON to false
	I1016 18:32:07.584183  291068 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4457,"bootTime":1760635071,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:32:07.584251  291068 start.go:141] virtualization:  
	I1016 18:32:07.586034  291068 out.go:179] * [addons-303264] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:32:07.587512  291068 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:32:07.587594  291068 notify.go:220] Checking for updates...
	I1016 18:32:07.590401  291068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:32:07.592235  291068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:32:07.593405  291068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:32:07.594977  291068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:32:07.596122  291068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:32:07.597540  291068 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:32:07.618548  291068 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:32:07.618678  291068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:32:07.685084  291068 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-16 18:32:07.674852465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:32:07.685227  291068 docker.go:318] overlay module found
	I1016 18:32:07.686634  291068 out.go:179] * Using the docker driver based on user configuration
	I1016 18:32:07.687769  291068 start.go:305] selected driver: docker
	I1016 18:32:07.687795  291068 start.go:925] validating driver "docker" against <nil>
	I1016 18:32:07.687819  291068 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:32:07.688555  291068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:32:07.749277  291068 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-16 18:32:07.739672497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:32:07.749457  291068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:32:07.749716  291068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:32:07.751040  291068 out.go:179] * Using Docker driver with root privileges
	I1016 18:32:07.752148  291068 cni.go:84] Creating CNI manager for ""
	I1016 18:32:07.752209  291068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:32:07.752222  291068 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:32:07.752297  291068 start.go:349] cluster config:
	{Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1016 18:32:07.753728  291068 out.go:179] * Starting "addons-303264" primary control-plane node in "addons-303264" cluster
	I1016 18:32:07.755041  291068 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:32:07.756241  291068 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:32:07.757316  291068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:32:07.757388  291068 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:32:07.757401  291068 cache.go:58] Caching tarball of preloaded images
	I1016 18:32:07.757493  291068 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:32:07.757507  291068 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:32:07.757836  291068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/config.json ...
	I1016 18:32:07.757861  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/config.json: {Name:mk3a4acacad842b0d0bcf0e299ebde6b8b609acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:07.758033  291068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:32:07.773810  291068 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1016 18:32:07.773941  291068 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1016 18:32:07.773966  291068 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1016 18:32:07.773972  291068 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1016 18:32:07.773982  291068 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1016 18:32:07.773988  291068 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1016 18:32:25.642782  291068 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1016 18:32:25.642826  291068 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:32:25.642871  291068 start.go:360] acquireMachinesLock for addons-303264: {Name:mke9093fccea664c8560b0ff83054243f330ac14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:32:25.642997  291068 start.go:364] duration metric: took 101.061µs to acquireMachinesLock for "addons-303264"
	I1016 18:32:25.643028  291068 start.go:93] Provisioning new machine with config: &{Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:32:25.643111  291068 start.go:125] createHost starting for "" (driver="docker")
	I1016 18:32:25.646639  291068 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1016 18:32:25.646903  291068 start.go:159] libmachine.API.Create for "addons-303264" (driver="docker")
	I1016 18:32:25.646950  291068 client.go:168] LocalClient.Create starting
	I1016 18:32:25.647091  291068 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 18:32:25.761415  291068 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 18:32:26.574839  291068 cli_runner.go:164] Run: docker network inspect addons-303264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:32:26.591381  291068 cli_runner.go:211] docker network inspect addons-303264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:32:26.591501  291068 network_create.go:284] running [docker network inspect addons-303264] to gather additional debugging logs...
	I1016 18:32:26.591526  291068 cli_runner.go:164] Run: docker network inspect addons-303264
	W1016 18:32:26.608664  291068 cli_runner.go:211] docker network inspect addons-303264 returned with exit code 1
	I1016 18:32:26.608699  291068 network_create.go:287] error running [docker network inspect addons-303264]: docker network inspect addons-303264: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-303264 not found
	I1016 18:32:26.608713  291068 network_create.go:289] output of [docker network inspect addons-303264]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-303264 not found
	
	** /stderr **
	I1016 18:32:26.608844  291068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:32:26.625623  291068 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3990}
	I1016 18:32:26.625664  291068 network_create.go:124] attempt to create docker network addons-303264 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1016 18:32:26.625719  291068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-303264 addons-303264
	I1016 18:32:26.690858  291068 network_create.go:108] docker network addons-303264 192.168.49.0/24 created
	I1016 18:32:26.690892  291068 kic.go:121] calculated static IP "192.168.49.2" for the "addons-303264" container
	I1016 18:32:26.690982  291068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:32:26.705311  291068 cli_runner.go:164] Run: docker volume create addons-303264 --label name.minikube.sigs.k8s.io=addons-303264 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:32:26.727396  291068 oci.go:103] Successfully created a docker volume addons-303264
	I1016 18:32:26.727495  291068 cli_runner.go:164] Run: docker run --rm --name addons-303264-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-303264 --entrypoint /usr/bin/test -v addons-303264:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 18:32:28.937233  291068 cli_runner.go:217] Completed: docker run --rm --name addons-303264-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-303264 --entrypoint /usr/bin/test -v addons-303264:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib: (2.209674317s)
	I1016 18:32:28.937262  291068 oci.go:107] Successfully prepared a docker volume addons-303264
	I1016 18:32:28.937304  291068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:32:28.937324  291068 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:32:28.937385  291068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-303264:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 18:32:33.430703  291068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-303264:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.493280041s)
	I1016 18:32:33.430734  291068 kic.go:203] duration metric: took 4.493406833s to extract preloaded images to volume ...
	W1016 18:32:33.430886  291068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 18:32:33.431023  291068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:32:33.484116  291068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-303264 --name addons-303264 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-303264 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-303264 --network addons-303264 --ip 192.168.49.2 --volume addons-303264:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 18:32:33.781620  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Running}}
	I1016 18:32:33.800381  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:32:33.821222  291068 cli_runner.go:164] Run: docker exec addons-303264 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:32:33.875572  291068 oci.go:144] the created container "addons-303264" has a running status.
	I1016 18:32:33.875607  291068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa...
	I1016 18:32:34.235581  291068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:32:34.258763  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:32:34.283071  291068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:32:34.283102  291068 kic_runner.go:114] Args: [docker exec --privileged addons-303264 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:32:34.325008  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:32:34.342333  291068 machine.go:93] provisionDockerMachine start ...
	I1016 18:32:34.342423  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:34.359445  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:34.359772  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:34.359783  291068 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:32:34.360451  291068 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:32:37.513239  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-303264
	
	I1016 18:32:37.513266  291068 ubuntu.go:182] provisioning hostname "addons-303264"
	I1016 18:32:37.513332  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:37.531389  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:37.531716  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:37.531734  291068 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-303264 && echo "addons-303264" | sudo tee /etc/hostname
	I1016 18:32:37.686897  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-303264
	
	I1016 18:32:37.686975  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:37.704635  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:37.704946  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:37.704966  291068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-303264' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-303264/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-303264' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:32:37.849396  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:32:37.849424  291068 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:32:37.849450  291068 ubuntu.go:190] setting up certificates
	I1016 18:32:37.849461  291068 provision.go:84] configureAuth start
	I1016 18:32:37.849523  291068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-303264
	I1016 18:32:37.866532  291068 provision.go:143] copyHostCerts
	I1016 18:32:37.866617  291068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:32:37.866759  291068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:32:37.866829  291068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:32:37.866892  291068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.addons-303264 san=[127.0.0.1 192.168.49.2 addons-303264 localhost minikube]
	I1016 18:32:38.098485  291068 provision.go:177] copyRemoteCerts
	I1016 18:32:38.098554  291068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:32:38.098598  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.117444  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.220910  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:32:38.238203  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:32:38.255933  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:32:38.273274  291068 provision.go:87] duration metric: took 423.786068ms to configureAuth
	I1016 18:32:38.273307  291068 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:32:38.273490  291068 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:32:38.273601  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.290252  291068 main.go:141] libmachine: Using SSH client type: native
	I1016 18:32:38.290565  291068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1016 18:32:38.290587  291068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:32:38.539578  291068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:32:38.539602  291068 machine.go:96] duration metric: took 4.197249119s to provisionDockerMachine
	I1016 18:32:38.539611  291068 client.go:171] duration metric: took 12.892649605s to LocalClient.Create
	I1016 18:32:38.539641  291068 start.go:167] duration metric: took 12.892723672s to libmachine.API.Create "addons-303264"
	I1016 18:32:38.539657  291068 start.go:293] postStartSetup for "addons-303264" (driver="docker")
	I1016 18:32:38.539668  291068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:32:38.539759  291068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:32:38.539805  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.556878  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.662895  291068 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:32:38.666583  291068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:32:38.666614  291068 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:32:38.666626  291068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:32:38.666715  291068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:32:38.666777  291068 start.go:296] duration metric: took 127.112308ms for postStartSetup
	I1016 18:32:38.667120  291068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-303264
	I1016 18:32:38.684876  291068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/config.json ...
	I1016 18:32:38.685273  291068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:32:38.685337  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.702141  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.802436  291068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:32:38.807070  291068 start.go:128] duration metric: took 13.163942831s to createHost
	I1016 18:32:38.807094  291068 start.go:83] releasing machines lock for "addons-303264", held for 13.164083637s
	I1016 18:32:38.807162  291068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-303264
	I1016 18:32:38.824464  291068 ssh_runner.go:195] Run: cat /version.json
	I1016 18:32:38.824488  291068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:32:38.824520  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.824560  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:32:38.844450  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:38.852336  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:32:39.034588  291068 ssh_runner.go:195] Run: systemctl --version
	I1016 18:32:39.040950  291068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:32:39.077434  291068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:32:39.081856  291068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:32:39.081960  291068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:32:39.110700  291068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1016 18:32:39.110733  291068 start.go:495] detecting cgroup driver to use...
	I1016 18:32:39.110766  291068 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:32:39.110832  291068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:32:39.127569  291068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:32:39.140376  291068 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:32:39.140438  291068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:32:39.158736  291068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:32:39.177990  291068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:32:39.291248  291068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:32:39.421189  291068 docker.go:234] disabling docker service ...
	I1016 18:32:39.421311  291068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:32:39.443225  291068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:32:39.456656  291068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:32:39.579336  291068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:32:39.694063  291068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:32:39.706857  291068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:32:39.720589  291068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:32:39.720672  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.729216  291068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:32:39.729291  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.737831  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.746136  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.754949  291068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:32:39.762817  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.771226  291068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.784128  291068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:32:39.793717  291068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:32:39.801111  291068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:32:39.808325  291068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:32:39.918821  291068 ssh_runner.go:195] Run: sudo systemctl restart crio
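Condensed, the CRI-O preparation above (crictl endpoint, pause image, cgroup manager, IPv4 forwarding, restart) amounts to roughly the following shell sequence; paths and values are taken from the log lines, and minikube's exact sed expressions appear above — this is an illustrative summary, not additional tool output:
	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and cgroup manager kubeadm will expect
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# enable IPv4 forwarding and restart CRI-O so the changes take effect
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio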
	I1016 18:32:40.047179  291068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:32:40.047328  291068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:32:40.052953  291068 start.go:563] Will wait 60s for crictl version
	I1016 18:32:40.053085  291068 ssh_runner.go:195] Run: which crictl
	I1016 18:32:40.059549  291068 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:32:40.096586  291068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:32:40.096758  291068 ssh_runner.go:195] Run: crio --version
	I1016 18:32:40.130836  291068 ssh_runner.go:195] Run: crio --version
	I1016 18:32:40.165035  291068 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:32:40.167888  291068 cli_runner.go:164] Run: docker network inspect addons-303264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:32:40.184755  291068 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:32:40.188982  291068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:32:40.199779  291068 kubeadm.go:883] updating cluster {Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:32:40.199900  291068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:32:40.199963  291068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:32:40.236053  291068 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:32:40.236077  291068 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:32:40.236133  291068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:32:40.263459  291068 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:32:40.263484  291068 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:32:40.263492  291068 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 18:32:40.263580  291068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-303264 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:32:40.263672  291068 ssh_runner.go:195] Run: crio config
	I1016 18:32:40.336157  291068 cni.go:84] Creating CNI manager for ""
	I1016 18:32:40.336191  291068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:32:40.336213  291068 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:32:40.336261  291068 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-303264 NodeName:addons-303264 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:32:40.336439  291068 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-303264"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:32:40.336528  291068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:32:40.344661  291068 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:32:40.344753  291068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:32:40.352884  291068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 18:32:40.366752  291068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:32:40.379513  291068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
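If the staged kubeadm.yaml ever needs to be checked by hand, recent kubeadm releases can lint a config file directly; the invocation below is illustrative (assuming the kubeadm binary cached alongside kubelet and kubectl) and is not part of the run recorded above:
	# sanity-check the generated config against kubeadm's schema
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new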
	I1016 18:32:40.392958  291068 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:32:40.396706  291068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:32:40.405979  291068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:32:40.516390  291068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:32:40.533353  291068 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264 for IP: 192.168.49.2
	I1016 18:32:40.533378  291068 certs.go:195] generating shared ca certs ...
	I1016 18:32:40.533394  291068 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:40.533599  291068 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:32:40.877674  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt ...
	I1016 18:32:40.877705  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt: {Name:mk27fd733cad0eb66b2f3a98a14dd84398d1eaa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:40.877933  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key ...
	I1016 18:32:40.877950  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key: {Name:mkc1ec8ff0d3175e6851ad88a1f8aae31f527492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:40.878047  291068 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:32:41.426674  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt ...
	I1016 18:32:41.426703  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt: {Name:mk6439ddace249e2586a7fd1718c7a829265fdab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:41.426892  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key ...
	I1016 18:32:41.426906  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key: {Name:mk33f0a1918158d348ed027ab4286c18ae5c709e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:41.426996  291068 certs.go:257] generating profile certs ...
	I1016 18:32:41.427057  291068 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.key
	I1016 18:32:41.427078  291068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt with IP's: []
	I1016 18:32:42.000589  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt ...
	I1016 18:32:42.000619  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: {Name:mk6b93ac1ce658048e7994efc7ba4a2cc77453a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.000812  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.key ...
	I1016 18:32:42.000825  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.key: {Name:mk594099792513e66d42a18369006a4332135bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.000912  291068 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45
	I1016 18:32:42.000933  291068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1016 18:32:42.507832  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45 ...
	I1016 18:32:42.507863  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45: {Name:mk285f0e178bcbdb668019dc814db28d26e6406f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.508047  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45 ...
	I1016 18:32:42.508064  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45: {Name:mkef5dc30d88340b24c58b0f1aa5ee11d71308cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.508154  291068 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt.249fcb45 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt
	I1016 18:32:42.508241  291068 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key.249fcb45 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key
	I1016 18:32:42.508303  291068 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key
	I1016 18:32:42.508324  291068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt with IP's: []
	I1016 18:32:42.618470  291068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt ...
	I1016 18:32:42.618503  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt: {Name:mk928b3858ec1e54cb9bb0aabd6ebc3dd71a4ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.619415  291068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key ...
	I1016 18:32:42.619452  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key: {Name:mkef6e3a58a143aecde32f301b1971211247a1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:32:42.619709  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:32:42.619773  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:32:42.619807  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:32:42.619852  291068 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:32:42.620539  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:32:42.639641  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:32:42.658250  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:32:42.676725  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:32:42.694443  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 18:32:42.711833  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:32:42.729467  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:32:42.747068  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 18:32:42.765831  291068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:32:42.783340  291068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:32:42.795781  291068 ssh_runner.go:195] Run: openssl version
	I1016 18:32:42.802038  291068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:32:42.810455  291068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:32:42.814367  291068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:32:42.814455  291068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:32:42.855115  291068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
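The hash and symlink steps above follow OpenSSL's subject-hash lookup convention: the CA is linked as <subject-hash>.0 so anything scanning /etc/ssl/certs can resolve it. A quick manual check of that correspondence (illustrative, not taken from the log):
	# the subject hash printed here is the symlink's basename, e.g. b5213941
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0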
	I1016 18:32:42.863460  291068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:32:42.867901  291068 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:32:42.867948  291068 kubeadm.go:400] StartCluster: {Name:addons-303264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-303264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:32:42.868022  291068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:32:42.868086  291068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:32:42.893536  291068 cri.go:89] found id: ""
	I1016 18:32:42.893607  291068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:32:42.901212  291068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:32:42.910407  291068 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:32:42.910527  291068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:32:42.921291  291068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:32:42.921351  291068 kubeadm.go:157] found existing configuration files:
	
	I1016 18:32:42.921422  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:32:42.931247  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:32:42.931363  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:32:42.939160  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:32:42.947698  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:32:42.947807  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:32:42.955556  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:32:42.964037  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:32:42.964148  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:32:42.972831  291068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:32:42.980282  291068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:32:42.980369  291068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:32:42.987590  291068 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:32:43.025419  291068 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:32:43.025792  291068 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:32:43.050704  291068 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:32:43.050843  291068 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1016 18:32:43.050904  291068 kubeadm.go:318] OS: Linux
	I1016 18:32:43.050981  291068 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:32:43.051060  291068 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1016 18:32:43.051142  291068 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:32:43.051218  291068 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:32:43.051293  291068 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:32:43.051415  291068 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:32:43.051489  291068 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:32:43.051545  291068 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:32:43.051599  291068 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1016 18:32:43.121562  291068 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:32:43.121725  291068 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:32:43.121865  291068 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:32:43.133175  291068 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:32:43.139246  291068 out.go:252]   - Generating certificates and keys ...
	I1016 18:32:43.139422  291068 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:32:43.139525  291068 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:32:43.738804  291068 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:32:44.048480  291068 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:32:45.317059  291068 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:32:45.514933  291068 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:32:45.852310  291068 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:32:45.852694  291068 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-303264 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 18:32:46.522530  291068 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:32:46.522905  291068 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-303264 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 18:32:47.279454  291068 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:32:48.548178  291068 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:32:48.632350  291068 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:32:48.632657  291068 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:32:49.062712  291068 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:32:49.416352  291068 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:32:49.894072  291068 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:32:50.490452  291068 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:32:50.748527  291068 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:32:50.748999  291068 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:32:50.754054  291068 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:32:50.757602  291068 out.go:252]   - Booting up control plane ...
	I1016 18:32:50.757721  291068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:32:50.757804  291068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:32:50.757873  291068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:32:50.772154  291068 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:32:50.772286  291068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:32:50.780164  291068 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:32:50.780605  291068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:32:50.780898  291068 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:32:50.904518  291068 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:32:50.904646  291068 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:32:52.406101  291068 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501623429s
	I1016 18:32:52.410513  291068 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:32:52.410650  291068 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1016 18:32:52.410783  291068 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:32:52.410955  291068 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:32:55.743933  291068 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.331528243s
	I1016 18:32:57.076407  291068 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.664450467s
	I1016 18:32:57.915051  291068 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502856329s
	I1016 18:32:57.936813  291068 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:32:57.952915  291068 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:32:57.971798  291068 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:32:57.972048  291068 kubeadm.go:318] [mark-control-plane] Marking the node addons-303264 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:32:57.989598  291068 kubeadm.go:318] [bootstrap-token] Using token: jh1ftm.0q5a4qmrb00w77x3
	I1016 18:32:57.992938  291068 out.go:252]   - Configuring RBAC rules ...
	I1016 18:32:57.993067  291068 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:32:58.001491  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:32:58.013474  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:32:58.018187  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:32:58.024445  291068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:32:58.029919  291068 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:32:58.324815  291068 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:32:58.755227  291068 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:32:59.321778  291068 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:32:59.322887  291068 kubeadm.go:318] 
	I1016 18:32:59.322967  291068 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:32:59.322973  291068 kubeadm.go:318] 
	I1016 18:32:59.323055  291068 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:32:59.323060  291068 kubeadm.go:318] 
	I1016 18:32:59.323102  291068 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:32:59.323195  291068 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:32:59.323265  291068 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:32:59.323275  291068 kubeadm.go:318] 
	I1016 18:32:59.323336  291068 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:32:59.323341  291068 kubeadm.go:318] 
	I1016 18:32:59.323403  291068 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:32:59.323416  291068 kubeadm.go:318] 
	I1016 18:32:59.323482  291068 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:32:59.323569  291068 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:32:59.323645  291068 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:32:59.323654  291068 kubeadm.go:318] 
	I1016 18:32:59.323744  291068 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:32:59.323840  291068 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:32:59.323852  291068 kubeadm.go:318] 
	I1016 18:32:59.323951  291068 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token jh1ftm.0q5a4qmrb00w77x3 \
	I1016 18:32:59.324086  291068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 18:32:59.324116  291068 kubeadm.go:318] 	--control-plane 
	I1016 18:32:59.324124  291068 kubeadm.go:318] 
	I1016 18:32:59.324213  291068 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:32:59.324223  291068 kubeadm.go:318] 
	I1016 18:32:59.324318  291068 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token jh1ftm.0q5a4qmrb00w77x3 \
	I1016 18:32:59.324434  291068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
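Should the --discovery-token-ca-cert-hash printed above ever need to be recomputed, the standard recipe from the kubeadm documentation works against the CA that minikube copied to /var/lib/minikube/certs/ca.crt earlier in this log (illustrative command, not executed in the run):
	# SHA-256 over the DER-encoded public key of the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'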
	I1016 18:32:59.327808  291068 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 18:32:59.328061  291068 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 18:32:59.328177  291068 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 18:32:59.328198  291068 cni.go:84] Creating CNI manager for ""
	I1016 18:32:59.328209  291068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:32:59.331527  291068 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:32:59.334618  291068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:32:59.338949  291068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:32:59.338972  291068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:32:59.351864  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:32:59.613121  291068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:32:59.613304  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:32:59.613377  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-303264 minikube.k8s.io/updated_at=2025_10_16T18_32_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=addons-303264 minikube.k8s.io/primary=true
	I1016 18:32:59.802952  291068 ops.go:34] apiserver oom_adj: -16
	I1016 18:32:59.803052  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:00.305557  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:00.803186  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:01.304030  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:01.803192  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:02.303256  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:02.804162  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:03.304124  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:03.803759  291068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:33:03.921907  291068 kubeadm.go:1113] duration metric: took 4.308658288s to wait for elevateKubeSystemPrivileges
	I1016 18:33:03.921933  291068 kubeadm.go:402] duration metric: took 21.053986989s to StartCluster
	I1016 18:33:03.921950  291068 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:33:03.922063  291068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:33:03.922517  291068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:33:03.922730  291068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:33:03.922876  291068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:33:03.923108  291068 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:33:03.923138  291068 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1016 18:33:03.923213  291068 addons.go:69] Setting yakd=true in profile "addons-303264"
	I1016 18:33:03.923231  291068 addons.go:238] Setting addon yakd=true in "addons-303264"
	I1016 18:33:03.923255  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.923704  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.924276  291068 addons.go:69] Setting metrics-server=true in profile "addons-303264"
	I1016 18:33:03.924298  291068 addons.go:238] Setting addon metrics-server=true in "addons-303264"
	I1016 18:33:03.924319  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.924730  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.924885  291068 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-303264"
	I1016 18:33:03.924900  291068 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-303264"
	I1016 18:33:03.924919  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.925356  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.927924  291068 addons.go:69] Setting registry=true in profile "addons-303264"
	I1016 18:33:03.927950  291068 addons.go:238] Setting addon registry=true in "addons-303264"
	I1016 18:33:03.927984  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.928415  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.928965  291068 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-303264"
	I1016 18:33:03.929043  291068 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-303264"
	I1016 18:33:03.930128  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.931462  291068 addons.go:69] Setting cloud-spanner=true in profile "addons-303264"
	I1016 18:33:03.931482  291068 addons.go:238] Setting addon cloud-spanner=true in "addons-303264"
	I1016 18:33:03.931504  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.931894  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.933033  291068 addons.go:69] Setting registry-creds=true in profile "addons-303264"
	I1016 18:33:03.933080  291068 addons.go:238] Setting addon registry-creds=true in "addons-303264"
	I1016 18:33:03.933239  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.933748  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.942796  291068 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-303264"
	I1016 18:33:03.942876  291068 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-303264"
	I1016 18:33:03.942912  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.943383  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.945725  291068 addons.go:69] Setting storage-provisioner=true in profile "addons-303264"
	I1016 18:33:03.945807  291068 addons.go:238] Setting addon storage-provisioner=true in "addons-303264"
	I1016 18:33:03.945884  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.946456  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.958323  291068 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-303264"
	I1016 18:33:03.958358  291068 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-303264"
	I1016 18:33:03.958707  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.958839  291068 addons.go:69] Setting default-storageclass=true in profile "addons-303264"
	I1016 18:33:03.958851  291068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-303264"
	I1016 18:33:03.959087  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.974610  291068 addons.go:69] Setting volcano=true in profile "addons-303264"
	I1016 18:33:03.974644  291068 addons.go:238] Setting addon volcano=true in "addons-303264"
	I1016 18:33:03.974681  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:03.975184  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:03.977216  291068 addons.go:69] Setting gcp-auth=true in profile "addons-303264"
	I1016 18:33:03.977252  291068 mustload.go:65] Loading cluster: addons-303264
	I1016 18:33:03.977526  291068 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:33:03.977794  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.007371  291068 addons.go:69] Setting volumesnapshots=true in profile "addons-303264"
	I1016 18:33:04.007415  291068 addons.go:238] Setting addon volumesnapshots=true in "addons-303264"
	I1016 18:33:04.007455  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.008177  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.011909  291068 addons.go:69] Setting ingress=true in profile "addons-303264"
	I1016 18:33:04.012000  291068 addons.go:238] Setting addon ingress=true in "addons-303264"
	I1016 18:33:04.012078  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.012730  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.025002  291068 addons.go:69] Setting ingress-dns=true in profile "addons-303264"
	I1016 18:33:04.025130  291068 addons.go:238] Setting addon ingress-dns=true in "addons-303264"
	I1016 18:33:04.025354  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.025746  291068 out.go:179] * Verifying Kubernetes components...
	I1016 18:33:04.138317  291068 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:33:04.141259  291068 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:33:04.141283  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:33:04.141354  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.026618  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.026628  291068 addons.go:69] Setting inspektor-gadget=true in profile "addons-303264"
	I1016 18:33:04.163024  291068 addons.go:238] Setting addon inspektor-gadget=true in "addons-303264"
	I1016 18:33:04.163096  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.163700  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.181819  291068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:33:04.182279  291068 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1016 18:33:04.185957  291068 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 18:33:04.185980  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1016 18:33:04.186044  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.186616  291068 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1016 18:33:04.212033  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1016 18:33:04.215090  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1016 18:33:04.215167  291068 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1016 18:33:04.215279  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	W1016 18:33:04.219821  291068 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1016 18:33:04.225290  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.062723  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.248734  291068 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1016 18:33:04.248863  291068 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1016 18:33:04.248905  291068 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1016 18:33:04.251599  291068 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1016 18:33:04.251621  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1016 18:33:04.251691  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.262233  291068 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-303264"
	I1016 18:33:04.262299  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.262704  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.273430  291068 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1016 18:33:04.277203  291068 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 18:33:04.277229  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1016 18:33:04.277294  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.296590  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1016 18:33:04.296610  291068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1016 18:33:04.296673  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.299156  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1016 18:33:04.299176  291068 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1016 18:33:04.299239  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.321415  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1016 18:33:04.329275  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1016 18:33:04.333407  291068 addons.go:238] Setting addon default-storageclass=true in "addons-303264"
	I1016 18:33:04.333453  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:04.333863  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:04.351599  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1016 18:33:04.351781  291068 out.go:179]   - Using image docker.io/registry:3.0.0
	I1016 18:33:04.363644  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1016 18:33:04.378876  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 18:33:04.380159  291068 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1016 18:33:04.380179  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1016 18:33:04.380246  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.411026  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.414017  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.421777  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 18:33:04.430458  291068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:33:04.433344  291068 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 18:33:04.442815  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1016 18:33:04.433369  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1016 18:33:04.451005  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.454290  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1016 18:33:04.454967  291068 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1016 18:33:04.455040  291068 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1016 18:33:04.455576  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.457395  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.461549  291068 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1016 18:33:04.461577  291068 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1016 18:33:04.461642  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.471182  291068 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1016 18:33:04.473306  291068 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 18:33:04.473330  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1016 18:33:04.473396  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.482874  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1016 18:33:04.492257  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1016 18:33:04.495258  291068 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1016 18:33:04.498098  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1016 18:33:04.498125  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1016 18:33:04.498191  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.505485  291068 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 18:33:04.505506  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1016 18:33:04.505568  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.535998  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.552389  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.553329  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.555390  291068 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1016 18:33:04.560140  291068 out.go:179]   - Using image docker.io/busybox:stable
	I1016 18:33:04.565987  291068 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 18:33:04.566010  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1016 18:33:04.566081  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.570721  291068 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:33:04.570747  291068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:33:04.570814  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:04.641490  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.642663  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.678929  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.691610  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.697546  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.702743  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.705404  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.720507  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:04.736871  291068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:33:05.182602  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 18:33:05.246321  291068 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1016 18:33:05.246344  291068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1016 18:33:05.274668  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 18:33:05.329612  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 18:33:05.335034  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:33:05.351938  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1016 18:33:05.352010  291068 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1016 18:33:05.368128  291068 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1016 18:33:05.368204  291068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1016 18:33:05.404766  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1016 18:33:05.436805  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 18:33:05.460273  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1016 18:33:05.460344  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1016 18:33:05.462788  291068 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:05.462860  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1016 18:33:05.479300  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1016 18:33:05.479376  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1016 18:33:05.484796  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 18:33:05.487825  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:33:05.525593  291068 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1016 18:33:05.525674  291068 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1016 18:33:05.556971  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 18:33:05.586493  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1016 18:33:05.586571  291068 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1016 18:33:05.603805  291068 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1016 18:33:05.603890  291068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1016 18:33:05.717222  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1016 18:33:05.717294  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1016 18:33:05.731215  291068 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1016 18:33:05.731281  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1016 18:33:05.731561  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1016 18:33:05.731598  291068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1016 18:33:05.761711  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:05.826455  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1016 18:33:05.826521  291068 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1016 18:33:05.835934  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1016 18:33:05.836015  291068 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1016 18:33:05.879021  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1016 18:33:05.879097  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1016 18:33:05.930812  291068 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 18:33:05.930875  291068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1016 18:33:05.957818  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1016 18:33:05.968485  291068 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 18:33:05.968552  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1016 18:33:06.045663  291068 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1016 18:33:06.045741  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1016 18:33:06.105259  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1016 18:33:06.105529  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1016 18:33:06.140580  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 18:33:06.167566  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 18:33:06.175336  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1016 18:33:06.256206  291068 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1016 18:33:06.256278  291068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1016 18:33:06.511012  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1016 18:33:06.511040  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1016 18:33:06.621301  291068 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.183844207s)
	I1016 18:33:06.621331  291068 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
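For context, the sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 on this network). A minimal sketch of how to inspect the injected stanza, assuming the standard kube-system/coredns ConfigMap name:

	# look at the Corefile fragment produced by the replace above
	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# expected fragment (taken from the sed expression above):
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }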
	I1016 18:33:06.622254  291068 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.885356272s)
	I1016 18:33:06.622858  291068 node_ready.go:35] waiting up to 6m0s for node "addons-303264" to be "Ready" ...
	I1016 18:33:06.623024  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.440340782s)
	I1016 18:33:06.807679  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1016 18:33:06.807706  291068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1016 18:33:07.005935  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1016 18:33:07.006007  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1016 18:33:07.128375  291068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-303264" context rescaled to 1 replicas
	I1016 18:33:07.218548  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1016 18:33:07.218613  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1016 18:33:07.398038  291068 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 18:33:07.398106  291068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1016 18:33:07.608248  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1016 18:33:08.627125  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:10.312329  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.03757901s)
	I1016 18:33:10.312367  291068 addons.go:479] Verifying addon ingress=true in "addons-303264"
	I1016 18:33:10.312531  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.982851627s)
	I1016 18:33:10.312675  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.977575665s)
	I1016 18:33:10.312721  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.907890973s)
	I1016 18:33:10.312781  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.875879765s)
	I1016 18:33:10.312844  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.827978646s)
	I1016 18:33:10.312897  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.824977322s)
	I1016 18:33:10.313010  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.755974911s)
	I1016 18:33:10.313090  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.551304871s)
	W1016 18:33:10.313108  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:10.313174  291068 retry.go:31] will retry after 312.12427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
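The repeated validation failures above trace back to the ig-crd.yaml transferred earlier (only 14 bytes per its scp line), which carries no apiVersion or kind, so kubectl's client-side validation rejects it on every retry. A minimal sketch of the header every manifest needs, using a hypothetical placeholder ConfigMap rather than the real gadget CRD:

	# placeholder object, only to show the required apiVersion/kind/metadata header
	cat <<'EOF' | kubectl apply --dry-run=client -f -
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example-manifest-header
	data: {}
	EOF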
	I1016 18:33:10.313213  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.355311742s)
	I1016 18:33:10.313242  291068 addons.go:479] Verifying addon registry=true in "addons-303264"
	I1016 18:33:10.313347  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.172734351s)
	I1016 18:33:10.313362  291068 addons.go:479] Verifying addon metrics-server=true in "addons-303264"
	I1016 18:33:10.313448  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.145858329s)
	W1016 18:33:10.313461  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1016 18:33:10.313470  291068 retry.go:31] will retry after 261.192275ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
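This failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object cannot be created until the snapshot.storage.k8s.io CRDs it references are registered, which is why the retry a moment later succeeds once the CRDs created in this first pass have been established. A minimal sketch of the two-step pattern, assuming the same addon file paths:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml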
	I1016 18:33:10.313524  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.138117063s)
	I1016 18:33:10.315669  291068 out.go:179] * Verifying ingress addon...
	I1016 18:33:10.320322  291068 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1016 18:33:10.321112  291068 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-303264 service yakd-dashboard -n yakd-dashboard
	
	I1016 18:33:10.321265  291068 out.go:179] * Verifying registry addon...
	I1016 18:33:10.324697  291068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1016 18:33:10.328017  291068 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1016 18:33:10.328034  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:10.333427  291068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 18:33:10.333444  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:10.335137  291068 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
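The storage-class error above is the standard optimistic-concurrency conflict: another writer updated the local-path StorageClass between the read and the write, so the stale update is rejected. A minimal sketch of a retry that re-reads the object on each attempt, assuming the same local-path class name (hypothetical loop, not part of the test run):

	for i in 1 2 3; do
	  # each invocation fetches the current object; --overwrite replaces the existing annotation
	  kubectl annotate storageclass local-path \
	    storageclass.kubernetes.io/is-default-class=false --overwrite && break
	  sleep 1
	done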
	I1016 18:33:10.575032  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 18:33:10.578634  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.970295555s)
	I1016 18:33:10.578670  291068 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-303264"
	I1016 18:33:10.581764  291068 out.go:179] * Verifying csi-hostpath-driver addon...
	I1016 18:33:10.585510  291068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1016 18:33:10.601063  291068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 18:33:10.601085  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:10.625654  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:10.824274  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:10.827889  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:11.090069  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:11.127111  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:11.324368  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:11.327817  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:11.590741  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:11.824075  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:11.827523  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:11.914839  291068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1016 18:33:11.914927  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:11.933920  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:12.047504  291068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1016 18:33:12.061553  291068 addons.go:238] Setting addon gcp-auth=true in "addons-303264"
	I1016 18:33:12.061648  291068 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:33:12.062113  291068 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:33:12.089639  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:12.090028  291068 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1016 18:33:12.090081  291068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:33:12.107482  291068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:33:12.323722  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:12.327198  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:12.589417  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:12.823414  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:12.827934  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:13.090542  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:13.127224  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:13.324232  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:13.328028  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:13.354321  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.779237805s)
	I1016 18:33:13.354449  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.728770468s)
	I1016 18:33:13.354521  291068 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.264472702s)
	W1016 18:33:13.354666  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:13.354694  291068 retry.go:31] will retry after 253.939228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:13.357533  291068 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1016 18:33:13.360627  291068 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 18:33:13.363576  291068 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1016 18:33:13.363602  291068 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1016 18:33:13.378604  291068 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1016 18:33:13.378627  291068 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1016 18:33:13.391735  291068 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 18:33:13.391805  291068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1016 18:33:13.407042  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 18:33:13.590379  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:13.609756  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:13.826948  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:13.837826  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:14.059579  291068 addons.go:479] Verifying addon gcp-auth=true in "addons-303264"
	I1016 18:33:14.062498  291068 out.go:179] * Verifying gcp-auth addon...
	I1016 18:33:14.066260  291068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1016 18:33:14.074654  291068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1016 18:33:14.074728  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:14.175110  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:14.324189  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:14.327719  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:14.543256  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:14.543293  291068 retry.go:31] will retry after 535.687382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:14.570043  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:14.589456  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:14.823594  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:14.829065  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:15.072337  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:15.079676  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:15.089948  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:15.323327  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:15.327976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:15.569658  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:15.589335  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:15.626767  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:15.826729  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:15.830464  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:15.887491  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:15.887522  291068 retry.go:31] will retry after 1.254627435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:16.070272  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:16.089399  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:16.324096  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:16.327583  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:16.573282  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:16.589073  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:16.823873  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:16.827361  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:17.072707  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:17.088541  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:17.142530  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:17.324483  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:17.328187  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:17.570190  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:17.589520  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:17.824892  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:17.827128  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:17.946142  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:17.946176  291068 retry.go:31] will retry after 1.306011986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:18.069112  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:18.089001  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:18.125866  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:18.323969  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:18.327212  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:18.569425  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:18.589111  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:18.823617  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:18.828210  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:19.072770  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:19.088659  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:19.253330  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:19.324627  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:19.337857  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:19.570008  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:19.588979  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:19.824556  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:19.827505  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:20.070516  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 18:33:20.086827  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:20.086862  291068 retry.go:31] will retry after 2.363936981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
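	# A hedged reading of the validation failure above: kubectl reports "apiVersion not set,
	# kind not set" for /etc/kubernetes/addons/ig-crd.yaml, which typically means the manifest
	# on the node is empty, truncated, or missing its apiVersion/kind header rather than
	# containing malformed YAML. A minimal sketch of confirming that from inside the node;
	# the file and binary paths are taken from the log, while the ssh step and the dry-run
	# flag are assumptions, not part of the recorded test:
	#
	#   minikube ssh -p addons-303264
	#   sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml          # a valid CRD starts with apiVersion/kind
	#   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	#     /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	#     -f /etc/kubernetes/addons/ig-crd.yaml                    # reproduces the validation error without applying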
	I1016 18:33:20.089462  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:20.126307  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:20.324090  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:20.327809  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:20.569811  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:20.588674  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:20.823066  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:20.827358  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:21.070481  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:21.089221  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:21.324110  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:21.327705  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:21.569743  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:21.589850  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:21.824378  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:21.827809  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:22.070998  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:22.088911  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:22.323825  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:22.327985  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:22.451297  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:22.570254  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:22.588917  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:22.625685  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:22.824613  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:22.828411  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:23.072877  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:23.089912  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:23.318537  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:23.318580  291068 retry.go:31] will retry after 2.580885903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:23.323834  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:23.327502  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:23.569202  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:23.588945  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:23.824090  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:23.828257  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:24.071263  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:24.089933  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:24.324378  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:24.328083  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:24.570252  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:24.589216  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:24.625866  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:24.823987  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:24.827394  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:25.070412  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:25.089341  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:25.323886  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:25.328595  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:25.569979  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:25.589209  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:25.824524  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:25.828126  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:25.900499  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:26.070773  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:26.089271  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:26.323023  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:26.327815  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:26.569887  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:26.588988  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:26.709317  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:26.709350  291068 retry.go:31] will retry after 2.380864454s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:26.823480  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:26.828419  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:27.070847  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:27.088764  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:27.126591  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:27.323726  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:27.327283  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:27.569561  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:27.588781  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:27.824101  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:27.827746  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:28.070985  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:28.088988  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:28.323469  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:28.327775  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:28.569893  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:28.589233  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:28.825417  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:28.827372  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:29.070030  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:29.089169  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:29.091223  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1016 18:33:29.127024  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:29.325425  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:29.328124  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:29.569870  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:29.589410  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:29.825835  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:29.829000  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:29.925601  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:29.925660  291068 retry.go:31] will retry after 8.291322723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:30.081962  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:30.089749  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:30.324035  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:30.327804  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:30.570037  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:30.589041  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:30.824069  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:30.827448  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:31.072403  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:31.088506  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:31.323900  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:31.327147  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:31.569101  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:31.589270  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:31.625961  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:31.824420  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:31.827705  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:32.072183  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:32.089372  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:32.323444  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:32.328063  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:32.569233  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:32.588585  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:32.823164  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:32.827787  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:33.070103  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:33.089571  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:33.323892  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:33.327309  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:33.569074  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:33.588917  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:33.823887  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:33.828311  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:34.071331  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:34.089443  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:34.127010  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:34.324305  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:34.330587  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:34.569640  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:34.588452  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:34.824336  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:34.827891  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:35.072561  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:35.089379  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:35.323802  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:35.328216  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:35.570055  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:35.588736  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:35.823537  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:35.828143  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:36.071665  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:36.088499  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:36.324030  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:36.327545  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:36.569920  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:36.588789  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:36.626263  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:36.824047  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:36.827365  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:37.070568  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:37.089733  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:37.323840  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:37.327492  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:37.569476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:37.588476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:37.823944  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:37.827325  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:38.071642  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:38.089106  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:38.217334  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:38.323899  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:38.327580  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:38.569792  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:38.589055  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:38.626551  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:38.824284  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:38.827626  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 18:33:39.020293  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:39.020328  291068 retry.go:31] will retry after 9.933327258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:39.070674  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:39.089125  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:39.323949  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:39.327303  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:39.570002  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:39.589366  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:39.823643  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:39.828135  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:40.074094  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:40.089233  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:40.323234  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:40.327735  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:40.569727  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:40.588590  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:40.823982  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:40.827442  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:41.069354  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:41.089375  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:41.126164  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:41.324668  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:41.328180  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:41.569423  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:41.590127  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:41.824186  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:41.827593  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:42.069743  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:42.088888  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:42.324143  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:42.327841  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:42.569777  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:42.588772  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:42.823528  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:42.828241  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:43.069511  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:43.088395  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:43.126270  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:43.323393  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:43.327986  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:43.570225  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:43.588992  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:43.823731  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:43.828036  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:44.071786  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:44.089345  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:44.323902  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:44.327673  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:44.570235  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:44.589124  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:44.823931  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:44.827117  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:45.082762  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:45.096257  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 18:33:45.127584  291068 node_ready.go:57] node "addons-303264" has "Ready":"False" status (will retry)
	I1016 18:33:45.327894  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:45.332161  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:45.570192  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:45.588952  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:45.832328  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:45.843364  291068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 18:33:45.843390  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:46.122153  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:46.123346  291068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 18:33:46.123384  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:46.131973  291068 node_ready.go:49] node "addons-303264" is "Ready"
	I1016 18:33:46.132004  291068 node_ready.go:38] duration metric: took 39.509119258s for node "addons-303264" to be "Ready" ...
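	# The node-readiness polling that just completed above can be reproduced by hand with
	# kubectl's built-in wait; a sketch assuming the minikube profile name doubles as the
	# kubeconfig context (minikube's usual default):
	#
	#   kubectl --context addons-303264 wait --for=condition=Ready node/addons-303264 --timeout=5m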
	I1016 18:33:46.132018  291068 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:33:46.132075  291068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:33:46.157075  291068 api_server.go:72] duration metric: took 42.234316237s to wait for apiserver process to appear ...
	I1016 18:33:46.157100  291068 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:33:46.157121  291068 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:33:46.177886  291068 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 18:33:46.182466  291068 api_server.go:141] control plane version: v1.34.1
	I1016 18:33:46.182496  291068 api_server.go:131] duration metric: took 25.388853ms to wait for apiserver health ...
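	# The healthz and version probes above hit the apiserver directly. A sketch of the same
	# checks from the host, assuming the address and port shown in the log; -k skips
	# verification of the cluster's self-signed certificate:
	#
	#   curl -sk https://192.168.49.2:8443/healthz     # expected output: ok
	#   kubectl --context addons-303264 version        # control plane version (v1.34.1 in this run)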
	I1016 18:33:46.182506  291068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:33:46.215655  291068 system_pods.go:59] 19 kube-system pods found
	I1016 18:33:46.215697  291068 system_pods.go:61] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.215707  291068 system_pods.go:61] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.215716  291068 system_pods.go:61] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.215723  291068 system_pods.go:61] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.215728  291068 system_pods.go:61] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.215733  291068 system_pods.go:61] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.215738  291068 system_pods.go:61] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.215743  291068 system_pods.go:61] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.215748  291068 system_pods.go:61] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending
	I1016 18:33:46.215752  291068 system_pods.go:61] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.215759  291068 system_pods.go:61] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.215765  291068 system_pods.go:61] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.215776  291068 system_pods.go:61] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending
	I1016 18:33:46.215784  291068 system_pods.go:61] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.215792  291068 system_pods.go:61] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.215800  291068 system_pods.go:61] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending
	I1016 18:33:46.215809  291068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.215821  291068 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.215826  291068 system_pods.go:61] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending
	I1016 18:33:46.215834  291068 system_pods.go:74] duration metric: took 33.321701ms to wait for pod list to return data ...
	I1016 18:33:46.215846  291068 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:33:46.219628  291068 default_sa.go:45] found service account: "default"
	I1016 18:33:46.219653  291068 default_sa.go:55] duration metric: took 3.800858ms for default service account to be created ...
	I1016 18:33:46.219662  291068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:33:46.236967  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:46.237006  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.237015  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.237024  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.237030  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.237035  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.237040  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.237044  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.237048  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.237053  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending
	I1016 18:33:46.237061  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.237067  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.237079  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.237083  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending
	I1016 18:33:46.237090  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.237099  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.237104  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending
	I1016 18:33:46.237112  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.237123  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.237127  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending
	I1016 18:33:46.237205  291068 retry.go:31] will retry after 264.166941ms: missing components: kube-dns
	I1016 18:33:46.335731  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:46.338492  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:46.517581  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:46.517622  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.517631  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.517641  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.517649  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.517654  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.517660  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.517665  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.517681  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.517693  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 18:33:46.517698  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.517703  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.517709  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.517718  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending
	I1016 18:33:46.517725  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.517730  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.517739  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending
	I1016 18:33:46.517747  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.517754  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.517763  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:33:46.517779  291068 retry.go:31] will retry after 261.532262ms: missing components: kube-dns
	I1016 18:33:46.616822  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:46.618229  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:46.784580  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:46.784617  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:33:46.784627  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:46.784636  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:46.784651  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:46.784659  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:46.784665  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:46.784670  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:46.784678  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:46.784685  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 18:33:46.784692  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:46.784697  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:46.784703  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:46.784709  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 18:33:46.784719  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:46.784725  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:46.784731  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 18:33:46.784738  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.784749  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:46.784755  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:33:46.784772  291068 retry.go:31] will retry after 406.660384ms: missing components: kube-dns
	I1016 18:33:46.823940  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:46.831156  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:47.072370  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:47.089562  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:47.197344  291068 system_pods.go:86] 19 kube-system pods found
	I1016 18:33:47.197386  291068 system_pods.go:89] "coredns-66bc5c9577-8ztvw" [39553a90-b0aa-4683-abfe-867cb5c35ca2] Running
	I1016 18:33:47.197397  291068 system_pods.go:89] "csi-hostpath-attacher-0" [9778b6d4-35ad-4e1a-9cf9-e68872db8da2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 18:33:47.197431  291068 system_pods.go:89] "csi-hostpath-resizer-0" [fbd0e89f-2c7d-4789-9747-9c121ae74bf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 18:33:47.197446  291068 system_pods.go:89] "csi-hostpathplugin-5z9bs" [03d5d6c8-db8c-449a-ba7a-8bdb9825c3a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 18:33:47.197452  291068 system_pods.go:89] "etcd-addons-303264" [a11fd941-580f-4bf5-b3b0-f63f082b7ea4] Running
	I1016 18:33:47.197457  291068 system_pods.go:89] "kindnet-mbblc" [7b8d0f9b-d177-4af5-85b5-ccd94f3a0449] Running
	I1016 18:33:47.197465  291068 system_pods.go:89] "kube-apiserver-addons-303264" [f18f501b-1831-40e2-8f9d-e5e92fa0b9dc] Running
	I1016 18:33:47.197470  291068 system_pods.go:89] "kube-controller-manager-addons-303264" [c1f2a093-2eb1-48d4-90ce-74fb0a24ee8a] Running
	I1016 18:33:47.197476  291068 system_pods.go:89] "kube-ingress-dns-minikube" [4c985e3a-06af-43df-b8cb-3e52efd16bcb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 18:33:47.197511  291068 system_pods.go:89] "kube-proxy-vfskf" [a0e25247-8b51-483a-8f53-8243d41ef9b5] Running
	I1016 18:33:47.197525  291068 system_pods.go:89] "kube-scheduler-addons-303264" [f7908d6d-be06-4cbf-8b15-7b43f4c72627] Running
	I1016 18:33:47.197532  291068 system_pods.go:89] "metrics-server-85b7d694d7-2pqhh" [39e00c5f-539c-4f89-8610-7975265868ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 18:33:47.197539  291068 system_pods.go:89] "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 18:33:47.197551  291068 system_pods.go:89] "registry-6b586f9694-tt65k" [25f718b4-be75-437f-a793-49619e3a4306] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 18:33:47.197557  291068 system_pods.go:89] "registry-creds-764b6fb674-25wdq" [2264cbde-5cda-424e-8a82-3fc4b7eeafe2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 18:33:47.197566  291068 system_pods.go:89] "registry-proxy-jktvf" [e60cff58-6e3a-4e66-90e2-ebcb83be567a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 18:33:47.197591  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7ncgr" [225729d2-76cb-40c0-bba9-78908c09c591] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:47.197607  291068 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9gxlc" [81b637c9-900e-4ffd-92fb-785bc9414d6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 18:33:47.197612  291068 system_pods.go:89] "storage-provisioner" [4bd1d8bb-9204-4426-a2be-f6fd29a6f308] Running
	I1016 18:33:47.197636  291068 system_pods.go:126] duration metric: took 977.967894ms to wait for k8s-apps to be running ...
	I1016 18:33:47.197646  291068 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:33:47.197720  291068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:33:47.223236  291068 system_svc.go:56] duration metric: took 25.580082ms WaitForService to wait for kubelet
	I1016 18:33:47.223262  291068 kubeadm.go:586] duration metric: took 43.300508563s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:33:47.223300  291068 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:33:47.226425  291068 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:33:47.226459  291068 node_conditions.go:123] node cpu capacity is 2
	I1016 18:33:47.226473  291068 node_conditions.go:105] duration metric: took 3.160904ms to run NodePressure ...
	I1016 18:33:47.226508  291068 start.go:241] waiting for startup goroutines ...
	I1016 18:33:47.323552  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:47.328312  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:47.569884  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:47.589874  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:47.824226  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:47.827829  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:48.073896  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:48.090157  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:48.324266  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:48.327448  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:48.569637  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:48.589608  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:48.824148  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:48.828610  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:48.953903  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:33:49.070248  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:49.098310  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:49.323878  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:49.327888  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:49.570957  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:49.589909  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:49.823393  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:49.827824  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:50.073810  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:50.090109  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:50.094174  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.140228403s)
	W1016 18:33:50.094259  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:33:50.094292  291068 retry.go:31] will retry after 10.297449613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
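The apply keeps failing because kubectl's client-side validation finds no top-level apiVersion or kind in ig-crd.yaml. As a rough illustration only (not minikube's code; the dependency and the local file path are assumptions), a small Go check for exactly those two fields could look like this:

	// check_manifest.go - minimal sketch, not part of minikube.
	// Assumes the gopkg.in/yaml.v3 module is available; checks only the
	// first YAML document in the file.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	// header captures only the two top-level fields that the kubectl
	// validation error above reports as missing from ig-crd.yaml.
	type header struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		data, err := os.ReadFile(os.Args[1]) // e.g. a local copy of ig-crd.yaml
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var h header
		if err := yaml.Unmarshal(data, &h); err != nil {
			fmt.Fprintln(os.Stderr, "not parseable as YAML:", err)
			os.Exit(1)
		}
		if h.APIVersion == "" || h.Kind == "" {
			// Matches the "[apiVersion not set, kind not set]" condition in the log.
			fmt.Println("manifest is missing apiVersion and/or kind")
			os.Exit(1)
		}
		fmt.Printf("manifest declares %s / %s\n", h.APIVersion, h.Kind)
	}
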
	I1016 18:33:50.323296  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:50.327620  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:50.570208  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:50.590409  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:50.824167  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:50.827669  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:51.070682  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:51.090624  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:51.323886  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:51.327861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:51.570851  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:51.602571  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:51.824639  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:51.830934  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:52.074305  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:52.097871  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:52.329810  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:52.330593  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:52.572861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:52.590790  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:52.824085  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:52.828901  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:53.073588  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:53.091175  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:53.325694  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:53.328976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:53.578647  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:53.589676  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:53.824611  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:53.828644  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:54.070618  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:54.089784  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:54.323935  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:54.327918  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:54.570599  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:54.589696  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:54.824314  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:54.828054  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:55.074305  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:55.090420  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:55.323925  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:55.327976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:55.570083  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:55.589684  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:55.823735  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:55.828625  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:56.073218  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:56.089810  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:56.324031  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:56.328134  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:56.570165  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:56.589791  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:56.824286  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:56.828070  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:57.070739  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:57.089435  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:57.324419  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:57.328454  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:57.569415  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:57.589274  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:57.823746  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:57.827413  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:58.074242  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:58.089804  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:58.325263  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:58.328020  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:58.570480  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:58.589606  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:58.824494  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:58.828220  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:59.071765  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:59.089828  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:59.324135  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:59.328205  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:33:59.570765  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:33:59.589620  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:33:59.824393  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:33:59.828251  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:00.107140  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:00.109429  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:00.329120  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:00.332455  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:00.392800  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:34:00.570115  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:00.590315  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:00.823374  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:00.828200  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:01.070674  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:01.089277  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:01.324483  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:01.328419  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:01.501551  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.108709299s)
	W1016 18:34:01.501590  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:34:01.501608  291068 retry.go:31] will retry after 16.143036034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:34:01.577914  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:01.610499  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:01.824485  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:01.828459  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:02.070785  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:02.088974  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:02.324370  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:02.328318  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:02.569818  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:02.591097  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:02.823910  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:02.827465  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:03.084966  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:03.101206  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:03.411424  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:03.411679  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:03.569859  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:03.589299  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:03.823943  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:03.827684  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:04.069925  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:04.089240  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:04.323676  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:04.327471  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:04.569686  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:04.589210  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:04.823276  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:04.827773  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:05.071355  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:05.089921  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:05.324465  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:05.330449  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:05.570011  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:05.589316  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:05.827118  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:05.828839  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:06.070583  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:06.088762  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:06.324072  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:06.327629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:06.569850  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:06.588886  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:06.824251  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:06.827827  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:07.077627  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:07.089235  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:07.325360  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:07.332140  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:07.568948  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:07.589411  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:07.823598  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:07.828653  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:08.078024  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:08.088828  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:08.326251  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:08.328819  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:08.570219  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:08.588788  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:08.824152  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:08.828134  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:09.074336  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:09.095304  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:09.323382  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:09.327825  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:09.570141  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:09.589547  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:09.823617  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:09.828176  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:10.070947  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:10.102322  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:10.323872  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:10.327539  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:10.570051  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:10.590933  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:10.824369  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:10.828594  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:11.072744  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:11.090478  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:11.324330  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:11.327720  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:11.570098  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:11.592740  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:11.824524  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:11.828253  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:12.069928  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:12.088900  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:12.324186  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:12.328085  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:12.569621  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:12.590137  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:12.824525  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:12.828290  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:13.070054  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:13.089825  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:13.325403  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:13.327801  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:13.570439  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:13.589008  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:13.824378  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:13.828153  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:14.070937  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:14.103738  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:14.324418  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:14.328023  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:14.574503  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:14.598166  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:14.824911  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:14.827630  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:15.073966  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:15.091730  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:15.325309  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:15.328399  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:15.569771  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:15.588887  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:15.825699  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:15.828114  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:16.070770  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:16.089351  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:16.323669  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:16.327421  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:16.569323  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:16.590092  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:16.824486  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:16.828168  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:17.070686  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:17.088918  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:17.325199  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:17.426125  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:17.569117  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:17.589186  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:17.645564  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:34:17.824402  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:17.828250  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:18.072637  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:18.091773  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:18.324665  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:18.329093  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:18.571249  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:18.588936  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:18.707635  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.06203214s)
	W1016 18:34:18.707669  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 18:34:18.707690  291068 retry.go:31] will retry after 43.779470207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
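Each failed apply above is retried after a longer delay (roughly 10s, then 16s, then 44s). A minimal, self-contained sketch of that retry-with-growing-delay pattern in Go follows; it is an illustration under assumed delays and an assumed command, not minikube's actual retry.go:

	// retry_sketch.go - illustrative only; the backoff factor and the
	// exec'd command are assumptions, not minikube internals.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry runs a command and, on failure, waits an increasing
	// amount of time before trying again, similar to the retry.go lines above.
	func applyWithRetry(name string, args []string, attempts int) error {
		delay := 10 * time.Second
		var err error
		for i := 0; i < attempts; i++ {
			out, runErr := exec.Command(name, args...).CombinedOutput()
			if runErr == nil {
				return nil
			}
			err = fmt.Errorf("attempt %d failed: %v\n%s", i+1, runErr, out)
			fmt.Println(err)
			if i < attempts-1 {
				fmt.Printf("will retry after %s\n", delay)
				time.Sleep(delay)
				delay = delay * 3 / 2 // grow the delay between attempts
			}
		}
		return err
	}

	func main() {
		// The manifest path mirrors the log but is only illustrative here.
		if err := applyWithRetry("kubectl", []string{"apply", "--force", "-f", "ig-crd.yaml"}, 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}
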
	I1016 18:34:18.823952  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:18.828426  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:19.071333  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:19.089925  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:19.324115  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:19.327756  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:19.570527  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:19.589175  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:19.824913  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:19.827862  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:20.075019  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:20.090053  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:20.323981  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:20.329565  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:20.569772  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:20.589164  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:20.823770  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:20.827735  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:21.074886  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:21.094149  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:21.323591  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:21.328541  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:21.570353  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:21.591033  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:21.825101  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:21.827598  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:22.070134  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:22.090169  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:22.323881  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:22.327416  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:22.569877  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:22.589728  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:22.823782  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:22.827818  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:23.070972  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:23.089757  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:23.325643  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:23.328152  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:23.570998  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:23.589949  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:23.824274  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:23.828077  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:24.070694  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:24.088806  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:24.324001  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:24.327683  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:24.570550  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:24.589347  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:24.823343  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:24.828100  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:25.069629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:25.088890  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:25.324693  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:25.328654  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:25.575940  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:25.589465  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:25.823672  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:25.827435  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:26.070925  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:26.092476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:26.323729  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:26.331950  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:26.569951  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:26.589553  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:26.823464  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:26.828057  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:27.071299  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:27.090539  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:27.324590  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:27.328372  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:27.570650  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:27.589007  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:27.824193  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:27.827473  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:28.071851  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:28.093912  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:28.325442  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:28.337254  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:28.571296  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:28.591299  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:28.824670  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:28.828477  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:29.069739  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:29.088748  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:29.324617  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:29.328244  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:29.569454  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:29.589083  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:29.824038  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:29.827479  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:30.098703  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:30.100106  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:30.327106  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:30.328594  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:30.569630  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:30.589391  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:30.824481  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:30.828221  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:31.072560  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:31.089016  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:31.330587  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:31.330766  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:31.570317  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:31.589623  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:31.823794  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:31.828262  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:32.069253  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:32.089365  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:32.324406  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:32.327976  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:32.570232  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:32.589841  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:32.824846  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:32.827791  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:33.070398  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:33.089584  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:33.324618  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:33.328495  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:33.570426  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:33.590776  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:33.829543  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:33.829675  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:34.069834  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:34.089399  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:34.323384  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:34.328374  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:34.569341  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:34.589558  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:34.823610  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:34.828390  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 18:34:35.071476  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:35.089015  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:35.323999  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:35.327670  291068 kapi.go:107] duration metric: took 1m25.002972197s to wait for kubernetes.io/minikube-addons=registry ...
	I1016 18:34:35.569847  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:35.588866  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:35.824196  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:36.069983  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:36.089265  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:36.326162  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:36.570629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:36.589221  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:36.824156  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:37.070655  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:37.090465  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:37.323806  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:37.569547  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:37.589692  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:37.828924  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:38.072336  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:38.091062  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:38.324089  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:38.570694  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:38.589024  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:38.823866  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:39.071644  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:39.089786  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:39.325057  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:39.569401  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:39.590608  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:39.823820  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:40.088077  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:40.091533  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:40.325628  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:40.570700  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:40.589542  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:40.823708  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:41.082495  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:41.095970  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:41.327806  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:41.572922  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:41.622486  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:41.824262  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:42.069806  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:42.090127  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:42.327138  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:42.569488  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:42.589300  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:42.824690  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:43.071095  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:43.090071  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:43.327306  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:43.570525  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:43.589486  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:43.824014  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:44.070380  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:44.089411  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:44.323621  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:44.578329  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:44.610442  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:44.824115  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:45.101646  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:45.103027  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:45.328284  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:45.569861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:45.591273  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:45.823411  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:46.069861  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:46.089846  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:46.323813  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:46.569912  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:46.589017  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:46.824331  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:47.069759  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:47.088921  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:47.324976  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:47.570244  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:47.590477  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:47.829234  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:48.069994  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:48.089627  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:48.324672  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:48.571237  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:48.590431  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:48.823894  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:49.075079  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:49.092979  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:49.330784  291068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 18:34:49.570696  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:49.589102  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:49.825196  291068 kapi.go:107] duration metric: took 1m39.504874132s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1016 18:34:50.075563  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:50.176322  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:50.570114  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:50.590284  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:51.075987  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:51.089088  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:51.570209  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:51.590033  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:52.071255  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:52.089789  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:52.569850  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:52.589629  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:53.070359  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:53.089800  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:53.570404  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:53.589585  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:54.070130  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:54.089011  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:54.569547  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:54.588712  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:55.069670  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:55.088969  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:55.569542  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:55.589695  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:56.070192  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:56.091487  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:56.570090  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:56.589111  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:57.070649  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:57.089097  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:57.570077  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:57.589928  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:58.071026  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:58.089283  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:58.570220  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:58.589567  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:59.069728  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:59.088760  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:34:59.569814  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:34:59.588693  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:00.094631  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:00.120907  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:00.570160  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:00.590032  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:01.069921  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:01.089966  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:01.571931  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:01.672797  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:02.072479  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 18:35:02.088870  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:02.487368  291068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 18:35:02.574151  291068 kapi.go:107] duration metric: took 1m48.507889147s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1016 18:35:02.577174  291068 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-303264 cluster.
	I1016 18:35:02.579994  291068 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1016 18:35:02.582667  291068 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1016 18:35:02.589097  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:03.089189  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:03.590231  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:03.599980  291068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.112514841s)
	W1016 18:35:03.600070  291068 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1016 18:35:03.600317  291068 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
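The validation failure recorded just above reports that /etc/kubernetes/addons/ig-crd.yaml has neither apiVersion nor kind set, i.e. the manifest itself is missing its header rather than the apply command being malformed. A minimal way to confirm this on the node, assuming the profile name and addon path from this run, is:

	$ minikube -p addons-303264 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# Given the file name, this is presumably a CRD; a well-formed CRD manifest starts with a header such as:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	# The --validate=false workaround suggested in the error only silences the client-side check;
	# the manifest still needs apiVersion and kind before kubectl can apply it.
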
	I1016 18:35:04.091366  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:04.590169  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:05.089910  291068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 18:35:05.589912  291068 kapi.go:107] duration metric: took 1m55.004401735s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1016 18:35:05.593098  291068 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1016 18:35:05.596160  291068 addons.go:514] duration metric: took 2m1.6730127s for enable addons: enabled=[registry-creds nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1016 18:35:05.596211  291068 start.go:246] waiting for cluster config update ...
	I1016 18:35:05.596233  291068 start.go:255] writing updated cluster config ...
	I1016 18:35:05.596533  291068 ssh_runner.go:195] Run: rm -f paused
	I1016 18:35:05.601027  291068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:35:05.604849  291068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8ztvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.612083  291068 pod_ready.go:94] pod "coredns-66bc5c9577-8ztvw" is "Ready"
	I1016 18:35:05.612112  291068 pod_ready.go:86] duration metric: took 7.234407ms for pod "coredns-66bc5c9577-8ztvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.614755  291068 pod_ready.go:83] waiting for pod "etcd-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.619653  291068 pod_ready.go:94] pod "etcd-addons-303264" is "Ready"
	I1016 18:35:05.619682  291068 pod_ready.go:86] duration metric: took 4.898085ms for pod "etcd-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.621982  291068 pod_ready.go:83] waiting for pod "kube-apiserver-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.626664  291068 pod_ready.go:94] pod "kube-apiserver-addons-303264" is "Ready"
	I1016 18:35:05.626695  291068 pod_ready.go:86] duration metric: took 4.688272ms for pod "kube-apiserver-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:05.630141  291068 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.005066  291068 pod_ready.go:94] pod "kube-controller-manager-addons-303264" is "Ready"
	I1016 18:35:06.005093  291068 pod_ready.go:86] duration metric: took 374.920822ms for pod "kube-controller-manager-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.207516  291068 pod_ready.go:83] waiting for pod "kube-proxy-vfskf" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.605079  291068 pod_ready.go:94] pod "kube-proxy-vfskf" is "Ready"
	I1016 18:35:06.605106  291068 pod_ready.go:86] duration metric: took 397.561683ms for pod "kube-proxy-vfskf" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:06.805257  291068 pod_ready.go:83] waiting for pod "kube-scheduler-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:07.209937  291068 pod_ready.go:94] pod "kube-scheduler-addons-303264" is "Ready"
	I1016 18:35:07.209969  291068 pod_ready.go:86] duration metric: took 404.683648ms for pod "kube-scheduler-addons-303264" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:35:07.209983  291068 pod_ready.go:40] duration metric: took 1.608920977s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:35:07.265830  291068 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 18:35:07.269126  291068 out.go:179] * Done! kubectl is now configured to use "addons-303264" cluster and "default" namespace by default
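For the gcp-auth opt-out mentioned in the addon output above, a minimal sketch of launching a pod the webhook should skip, assuming the conventional "true" value for the gcp-auth-skip-secret label and reusing the busybox image already pulled in this run, is:

	$ kubectl run skip-gcp-auth-demo \
	    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
	    --labels=gcp-auth-skip-secret=true \
	    --restart=Never -- sleep 3600
	# The pod name "skip-gcp-auth-demo" is hypothetical; any pod carrying this label
	# should be left without the mounted GCP credentials.
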
	
	
	==> CRI-O <==
	Oct 16 18:35:05 addons-303264 crio[833]: time="2025-10-16T18:35:05.449751788Z" level=info msg="Created container 4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630: kube-system/csi-hostpathplugin-5z9bs/csi-snapshotter" id=473e550a-0296-498f-9d22-1bfee22be419 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:35:05 addons-303264 crio[833]: time="2025-10-16T18:35:05.451047808Z" level=info msg="Starting container: 4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630" id=b7f90c28-1568-402d-bb20-b4fb6fad8459 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:35:05 addons-303264 crio[833]: time="2025-10-16T18:35:05.454189321Z" level=info msg="Started container" PID=4983 containerID=4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630 description=kube-system/csi-hostpathplugin-5z9bs/csi-snapshotter id=b7f90c28-1568-402d-bb20-b4fb6fad8459 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0cb215c7536bc0678fd4871ade2b40a9d1e4a38a59322ed8ca146471f9ac2731
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.582276945Z" level=info msg="Running pod sandbox: default/busybox/POD" id=64b8ddb9-313d-4108-96c0-d999e19a4045 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.582364124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.589175854Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0635dc8f3b8c269cca64adb189e73ad09e19ea42434739dfe843cb1549532d97 UID:2399eb6b-0b70-4a46-acca-4929071138df NetNS:/var/run/netns/e8a638e0-dfff-4cf1-8cb2-e43a82f87789 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078448}] Aliases:map[]}"
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.589336773Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.602223291Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0635dc8f3b8c269cca64adb189e73ad09e19ea42434739dfe843cb1549532d97 UID:2399eb6b-0b70-4a46-acca-4929071138df NetNS:/var/run/netns/e8a638e0-dfff-4cf1-8cb2-e43a82f87789 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078448}] Aliases:map[]}"
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.602519643Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.606099431Z" level=info msg="Ran pod sandbox 0635dc8f3b8c269cca64adb189e73ad09e19ea42434739dfe843cb1549532d97 with infra container: default/busybox/POD" id=64b8ddb9-313d-4108-96c0-d999e19a4045 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.609717021Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bdc32325-640f-4b33-b765-d9b493f1d732 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.609975318Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bdc32325-640f-4b33-b765-d9b493f1d732 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.610089861Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bdc32325-640f-4b33-b765-d9b493f1d732 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.610954597Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f384a95b-2fb4-4ae3-b6ac-63535e0f6dea name=/runtime.v1.ImageService/PullImage
	Oct 16 18:35:08 addons-303264 crio[833]: time="2025-10-16T18:35:08.612099307Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.48948495Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f384a95b-2fb4-4ae3-b6ac-63535e0f6dea name=/runtime.v1.ImageService/PullImage
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.490095713Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=36c75537-a16f-4c16-a5d8-8f572889b678 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.491901657Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=131545a6-9743-4618-b71e-b43b88c0de5d name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.501047765Z" level=info msg="Creating container: default/busybox/busybox" id=5f50be52-3c1c-4f2e-bb79-ff8ae46a27ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.5022442Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.515549738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.516175599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.534560442Z" level=info msg="Created container 01b3bc8a867f979226cbfa48aed006f0faa7fc4ca35d28180598202e352da36c: default/busybox/busybox" id=5f50be52-3c1c-4f2e-bb79-ff8ae46a27ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.535466597Z" level=info msg="Starting container: 01b3bc8a867f979226cbfa48aed006f0faa7fc4ca35d28180598202e352da36c" id=e57eeeab-b7bc-48c6-bc84-e9931cb3efcf name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:35:10 addons-303264 crio[833]: time="2025-10-16T18:35:10.541288135Z" level=info msg="Started container" PID=5067 containerID=01b3bc8a867f979226cbfa48aed006f0faa7fc4ca35d28180598202e352da36c description=default/busybox/busybox id=e57eeeab-b7bc-48c6-bc84-e9931cb3efcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=0635dc8f3b8c269cca64adb189e73ad09e19ea42434739dfe843cb1549532d97
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	01b3bc8a867f9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          10 seconds ago       Running             busybox                                  0                   0635dc8f3b8c2       busybox                                     default
	4c854724ff606       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	72c450061ca94       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	d3c44cd5669c9       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	ed8f5ff4c7d24       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 19 seconds ago       Running             gcp-auth                                 0                   66f332b550973       gcp-auth-78565c9fb4-7stxd                   gcp-auth
	3b5b7fd1d4794       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             26 seconds ago       Exited              patch                                    3                   1961814a76c6d       gcp-auth-certs-patch-qcnqp                  gcp-auth
	2fd75860dad3e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           30 seconds ago       Running             hostpath                                 0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	6eb687c5bd9ac       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             32 seconds ago       Running             controller                               0                   8b61339c51fb7       ingress-nginx-controller-675c5ddd98-l5ks7   ingress-nginx
	b85fa5b248e27       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                39 seconds ago       Running             node-driver-registrar                    0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	988ad1327faa5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            40 seconds ago       Running             gadget                                   0                   da7d713ef51cc       gadget-xkdv7                                gadget
	43cc085a9644a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   44 seconds ago       Exited              create                                   0                   fbe6e0288ec33       gcp-auth-certs-create-9hkzh                 gcp-auth
	817135be1fb12       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             44 seconds ago       Running             csi-attacher                             0                   0b1a0b740b239       csi-hostpath-attacher-0                     kube-system
	cc0546bd9d12a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              46 seconds ago       Running             registry-proxy                           0                   d205eed98147d       registry-proxy-jktvf                        kube-system
	9b0f87f3e3a62       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   49 seconds ago       Exited              patch                                    0                   e3dfa7a55055c       ingress-nginx-admission-patch-ndrbx         ingress-nginx
	83e350274adee       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     49 seconds ago       Running             nvidia-device-plugin-ctr                 0                   8c7880db89b97       nvidia-device-plugin-daemonset-frsg8        kube-system
	4d4a9d8e61179       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   0cb215c7536bc       csi-hostpathplugin-5z9bs                    kube-system
	725ad79381cb1       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   0fbb003d171d5       yakd-dashboard-5ff678cb9-qzhjz              yakd-dashboard
	f4b21b5d4fe92       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   ed6255039ae21       ingress-nginx-admission-create-j7q4k        ingress-nginx
	96ac5bbeec4b1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   3c603989b64c4       local-path-provisioner-648f6765c9-jzvjp     local-path-storage
	54a940e28a474       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   c5664f50e874c       csi-hostpath-resizer-0                      kube-system
	ddb9eebdec6b1       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   c04e6a698eb6b       registry-6b586f9694-tt65k                   kube-system
	42b57482939e2       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   17a03812a9495       kube-ingress-dns-minikube                   kube-system
	563604467d1e7       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   a139d7f50eb69       cloud-spanner-emulator-86bd5cbb97-jl554     default
	a1df688b216b8       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   ec83c190026b4       metrics-server-85b7d694d7-2pqhh             kube-system
	8049d0179c2ce       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   342d7268e20dc       snapshot-controller-7d9fbc56b8-9gxlc        kube-system
	2f9a34f263e49       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   8df027d1f4f07       snapshot-controller-7d9fbc56b8-7ncgr        kube-system
	a11803eed98f1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   979e33544a5ba       storage-provisioner                         kube-system
	2150dbabd80c7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   b34b5acd5c2d8       coredns-66bc5c9577-8ztvw                    kube-system
	a43557a0c4603       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   78e312da20470       kube-proxy-vfskf                            kube-system
	3478855350e27       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   bbdf78d3a843e       kindnet-mbblc                               kube-system
	2f7b424d8bee4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   e2caa69fcfd51       kube-scheduler-addons-303264                kube-system
	060c04d69de0b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   c42615248c708       kube-apiserver-addons-303264                kube-system
	b9c25f79f72e1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   081eb00b5824d       kube-controller-manager-addons-303264       kube-system
	014826c0f016d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   fa4ac1df76f10       etcd-addons-303264                          kube-system
	
	
	==> coredns [2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b] <==
	[INFO] 10.244.0.11:53699 - 47852 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000060988s
	[INFO] 10.244.0.11:53699 - 17023 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002411706s
	[INFO] 10.244.0.11:53699 - 22731 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001800115s
	[INFO] 10.244.0.11:53699 - 52122 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000106937s
	[INFO] 10.244.0.11:53699 - 16196 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000070818s
	[INFO] 10.244.0.11:44753 - 22368 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000134383s
	[INFO] 10.244.0.11:44753 - 22155 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161263s
	[INFO] 10.244.0.11:35921 - 9455 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000103606s
	[INFO] 10.244.0.11:35921 - 9718 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125275s
	[INFO] 10.244.0.11:59016 - 43478 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081091s
	[INFO] 10.244.0.11:59016 - 43034 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080106s
	[INFO] 10.244.0.11:54541 - 23156 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001326648s
	[INFO] 10.244.0.11:54541 - 22992 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001358697s
	[INFO] 10.244.0.11:43268 - 38611 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156479s
	[INFO] 10.244.0.11:43268 - 38183 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100939s
	[INFO] 10.244.0.21:58600 - 13277 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000215663s
	[INFO] 10.244.0.21:38237 - 30403 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00011145s
	[INFO] 10.244.0.21:54851 - 62294 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108939s
	[INFO] 10.244.0.21:48090 - 58170 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129837s
	[INFO] 10.244.0.21:41642 - 55017 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095753s
	[INFO] 10.244.0.21:43899 - 53944 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100914s
	[INFO] 10.244.0.21:32818 - 60481 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001498465s
	[INFO] 10.244.0.21:45500 - 554 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002193158s
	[INFO] 10.244.0.21:38362 - 21090 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002937607s
	[INFO] 10.244.0.21:45280 - 42672 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.00259401s
	
	
	==> describe nodes <==
	Name:               addons-303264
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-303264
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=addons-303264
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_32_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-303264
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-303264"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:32:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-303264
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:35:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:35:01 +0000   Thu, 16 Oct 2025 18:32:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:35:01 +0000   Thu, 16 Oct 2025 18:32:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:35:01 +0000   Thu, 16 Oct 2025 18:32:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:35:01 +0000   Thu, 16 Oct 2025 18:33:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-303264
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                07b2a673-6498-471b-80f5-89e4ac06aded
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-jl554      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  gadget                      gadget-xkdv7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  gcp-auth                    gcp-auth-78565c9fb4-7stxd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-l5ks7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m10s
	  kube-system                 coredns-66bc5c9577-8ztvw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpathplugin-5z9bs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 etcd-addons-303264                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-mbblc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-addons-303264                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-addons-303264        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-vfskf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-addons-303264                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 metrics-server-85b7d694d7-2pqhh              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m11s
	  kube-system                 nvidia-device-plugin-daemonset-frsg8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 registry-6b586f9694-tt65k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 registry-creds-764b6fb674-25wdq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 registry-proxy-jktvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 snapshot-controller-7d9fbc56b8-7ncgr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-9gxlc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  local-path-storage          local-path-provisioner-648f6765c9-jzvjp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qzhjz               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m15s  kube-proxy       
	  Normal   Starting                 2m22s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m22s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s  kubelet          Node addons-303264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s  kubelet          Node addons-303264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s  kubelet          Node addons-303264 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m18s  node-controller  Node addons-303264 event: Registered Node addons-303264 in Controller
	  Normal   NodeReady                95s    kubelet          Node addons-303264 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct16 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015294] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035217] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.777829] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.353148] kauditd_printk_skb: 36 callbacks suppressed
	[Oct16 17:39] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=00000000a1708097{9P.session} n=00000000c48db394
	[  +0.001150] FS-Cache: O-key=[10] '34323935323233313231'
	[  +0.000794] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a1708097{9P.session} n=0000000008f2874d
	[  +0.001104] FS-Cache: N-key=[10] '34323935323233313231'
	[Oct16 17:40] hrtimer: interrupt took 46683506 ns
	[Oct16 18:30] kauditd_printk_skb: 8 callbacks suppressed
	[Oct16 18:32] overlayfs: idmapped layers are currently not supported
	[  +0.067059] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23] <==
	{"level":"warn","ts":"2025-10-16T18:32:54.744477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.768901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.779071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.795833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.819860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.835636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.860082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.877943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.898609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.916947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.934554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.945905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.970804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.987703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:54.997909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.045851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.068366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.084225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:32:55.188126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:10.964842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:10.979245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:32.991801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:33.006276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:33.026805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:33:33.049454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38996","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ed8f5ff4c7d2466aa759c52b8296a84524de7ba1e817213c099710ad380d71ef] <==
	2025/10/16 18:35:01 GCP Auth Webhook started!
	2025/10/16 18:35:07 Ready to marshal response ...
	2025/10/16 18:35:07 Ready to write response ...
	2025/10/16 18:35:08 Ready to marshal response ...
	2025/10/16 18:35:08 Ready to write response ...
	2025/10/16 18:35:08 Ready to marshal response ...
	2025/10/16 18:35:08 Ready to write response ...
	
	
	==> kernel <==
	 18:35:21 up  1:17,  0 user,  load average: 1.77, 2.51, 3.07
	Linux addons-303264 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad] <==
	E1016 18:33:35.209590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 18:33:35.209711       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1016 18:33:36.737606       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:33:36.737724       1 metrics.go:72] Registering metrics
	I1016 18:33:36.737789       1 controller.go:711] "Syncing nftables rules"
	I1016 18:33:45.137456       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:33:45.137503       1 main.go:301] handling current node
	I1016 18:33:55.136665       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:33:55.136702       1 main.go:301] handling current node
	I1016 18:34:05.136603       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:34:05.136954       1 main.go:301] handling current node
	I1016 18:34:15.137055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:34:15.137090       1 main.go:301] handling current node
	I1016 18:34:25.137368       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:34:25.137396       1 main.go:301] handling current node
	I1016 18:34:35.136543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:34:35.136580       1 main.go:301] handling current node
	I1016 18:34:45.144828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:34:45.144868       1 main.go:301] handling current node
	I1016 18:34:55.137278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:34:55.137316       1 main.go:301] handling current node
	I1016 18:35:05.136808       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:35:05.136843       1 main.go:301] handling current node
	I1016 18:35:15.136482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:35:15.136610       1 main.go:301] handling current node
	
	
	==> kube-apiserver [060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872] <==
	I1016 18:33:10.537509       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.228.9"}
	W1016 18:33:10.963814       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:10.979004       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1016 18:33:13.864130       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.7.145"}
	W1016 18:33:32.991255       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:33.005067       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:33.026234       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:33.048819       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1016 18:33:45.668065       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.7.145:443: connect: connection refused
	E1016 18:33:45.668200       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.7.145:443: connect: connection refused" logger="UnhandledError"
	W1016 18:33:45.668873       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.7.145:443: connect: connection refused
	E1016 18:33:45.669007       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.7.145:443: connect: connection refused" logger="UnhandledError"
	W1016 18:33:45.763989       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.7.145:443: connect: connection refused
	E1016 18:33:45.765171       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.7.145:443: connect: connection refused" logger="UnhandledError"
	E1016 18:34:03.234820       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.211.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.211.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.211.125:443: connect: connection refused" logger="UnhandledError"
	W1016 18:34:03.235032       1 handler_proxy.go:99] no RequestInfo found in the context
	E1016 18:34:03.235088       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1016 18:34:03.351553       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1016 18:34:03.407243       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1016 18:35:18.496793       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60958: use of closed network connection
	E1016 18:35:18.729646       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60990: use of closed network connection
	E1016 18:35:18.855817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:32786: use of closed network connection
	
	
	==> kube-controller-manager [b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c] <==
	I1016 18:33:02.985767       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:33:02.990000       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:33:02.990381       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:33:03.007550       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:33:03.016543       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:33:03.016668       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:33:03.016729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:33:03.016924       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:33:03.017614       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 18:33:03.019248       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:33:03.019270       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:33:03.023556       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:33:03.028420       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1016 18:33:09.336208       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1016 18:33:32.979898       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1016 18:33:32.984046       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1016 18:33:33.034254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1016 18:33:33.034388       1 shared_informer.go:682] "Warning: resync period is smaller than resync check period and the informer has already started. Changing it to the resync check period" resyncPeriod="19h10m34.188859875s" resyncCheckPeriod="19h55m27.132189845s"
	I1016 18:33:33.034423       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1016 18:33:33.034475       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1016 18:33:33.034502       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:33:33.085280       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:33:47.964592       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1016 18:34:03.039632       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1016 18:34:03.113755       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55] <==
	I1016 18:33:05.094209       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:33:05.195664       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:33:05.296281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:33:05.296321       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 18:33:05.296387       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:33:05.337517       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:33:05.337570       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:33:05.356618       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:33:05.356949       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:33:05.356966       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:33:05.362643       1 config.go:200] "Starting service config controller"
	I1016 18:33:05.362663       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:33:05.362680       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:33:05.362685       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:33:05.362695       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:33:05.362703       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:33:05.363337       1 config.go:309] "Starting node config controller"
	I1016 18:33:05.363345       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:33:05.363351       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:33:05.463749       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:33:05.463783       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:33:05.463818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67] <==
	I1016 18:32:57.061981       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:32:57.066505       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:32:57.066631       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:32:57.066654       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:32:57.066671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 18:32:57.074960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:32:57.077756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:32:57.077841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:32:57.077895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:32:57.077957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:32:57.078010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:32:57.078073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:32:57.081372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 18:32:57.081689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:32:57.081748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:32:57.081909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:32:57.081964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:32:57.082007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:32:57.082055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:32:57.082102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:32:57.082144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:32:57.082201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:32:57.082294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:32:57.083381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1016 18:32:58.067533       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:34:36 addons-303264 kubelet[1275]: I1016 18:34:36.249957    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jktvf" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 18:34:36 addons-303264 kubelet[1275]: I1016 18:34:36.269381    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-attacher-0" podStartSLOduration=37.081141524 podStartE2EDuration="1m26.269359728s" podCreationTimestamp="2025-10-16 18:33:10 +0000 UTC" firstStartedPulling="2025-10-16 18:33:46.728149681 +0000 UTC m=+48.171985720" lastFinishedPulling="2025-10-16 18:34:35.916367884 +0000 UTC m=+97.360203924" observedRunningTime="2025-10-16 18:34:36.268641865 +0000 UTC m=+97.712477913" watchObservedRunningTime="2025-10-16 18:34:36.269359728 +0000 UTC m=+97.713195768"
	Oct 16 18:34:39 addons-303264 kubelet[1275]: I1016 18:34:39.633542    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc8cb\" (UniqueName: \"kubernetes.io/projected/ddcaffca-cae0-489e-abe5-0532344fabc9-kube-api-access-kc8cb\") pod \"ddcaffca-cae0-489e-abe5-0532344fabc9\" (UID: \"ddcaffca-cae0-489e-abe5-0532344fabc9\") "
	Oct 16 18:34:39 addons-303264 kubelet[1275]: I1016 18:34:39.640126    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddcaffca-cae0-489e-abe5-0532344fabc9-kube-api-access-kc8cb" (OuterVolumeSpecName: "kube-api-access-kc8cb") pod "ddcaffca-cae0-489e-abe5-0532344fabc9" (UID: "ddcaffca-cae0-489e-abe5-0532344fabc9"). InnerVolumeSpecName "kube-api-access-kc8cb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 16 18:34:39 addons-303264 kubelet[1275]: I1016 18:34:39.735318    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kc8cb\" (UniqueName: \"kubernetes.io/projected/ddcaffca-cae0-489e-abe5-0532344fabc9-kube-api-access-kc8cb\") on node \"addons-303264\" DevicePath \"\""
	Oct 16 18:34:40 addons-303264 kubelet[1275]: I1016 18:34:40.273020    1275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe6e0288ec33c820b4f511220284d7374149b5791aa7658edce4c056d531705"
	Oct 16 18:34:41 addons-303264 kubelet[1275]: I1016 18:34:41.302147    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-xkdv7" podStartSLOduration=67.034495147 podStartE2EDuration="1m32.302127045s" podCreationTimestamp="2025-10-16 18:33:09 +0000 UTC" firstStartedPulling="2025-10-16 18:34:14.959588012 +0000 UTC m=+76.403424060" lastFinishedPulling="2025-10-16 18:34:40.22721991 +0000 UTC m=+101.671055958" observedRunningTime="2025-10-16 18:34:41.301680541 +0000 UTC m=+102.745516589" watchObservedRunningTime="2025-10-16 18:34:41.302127045 +0000 UTC m=+102.745963085"
	Oct 16 18:34:42 addons-303264 kubelet[1275]: I1016 18:34:42.683995    1275 scope.go:117] "RemoveContainer" containerID="471b03917051589c921393e5cedbfc98d7c6a1b6ffbf46cf6ab557a4d7530aa2"
	Oct 16 18:34:42 addons-303264 kubelet[1275]: E1016 18:34:42.684178    1275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"patch\" with CrashLoopBackOff: \"back-off 20s restarting failed container=patch pod=gcp-auth-certs-patch-qcnqp_gcp-auth(74208f7b-d604-412e-b5be-cf2eed1ba93a)\"" pod="gcp-auth/gcp-auth-certs-patch-qcnqp" podUID="74208f7b-d604-412e-b5be-cf2eed1ba93a"
	Oct 16 18:34:49 addons-303264 kubelet[1275]: I1016 18:34:49.400548    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-l5ks7" podStartSLOduration=68.933678938 podStartE2EDuration="1m39.400531202s" podCreationTimestamp="2025-10-16 18:33:10 +0000 UTC" firstStartedPulling="2025-10-16 18:34:17.878501466 +0000 UTC m=+79.322337506" lastFinishedPulling="2025-10-16 18:34:48.345353722 +0000 UTC m=+109.789189770" observedRunningTime="2025-10-16 18:34:49.399813453 +0000 UTC m=+110.843649493" watchObservedRunningTime="2025-10-16 18:34:49.400531202 +0000 UTC m=+110.844367242"
	Oct 16 18:34:49 addons-303264 kubelet[1275]: E1016 18:34:49.648888    1275 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 16 18:34:49 addons-303264 kubelet[1275]: E1016 18:34:49.648986    1275 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2264cbde-5cda-424e-8a82-3fc4b7eeafe2-gcr-creds podName:2264cbde-5cda-424e-8a82-3fc4b7eeafe2 nodeName:}" failed. No retries permitted until 2025-10-16 18:35:53.648966458 +0000 UTC m=+175.092802498 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/2264cbde-5cda-424e-8a82-3fc4b7eeafe2-gcr-creds") pod "registry-creds-764b6fb674-25wdq" (UID: "2264cbde-5cda-424e-8a82-3fc4b7eeafe2") : secret "registry-creds-gcr" not found
	Oct 16 18:34:50 addons-303264 kubelet[1275]: I1016 18:34:50.907088    1275 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 16 18:34:50 addons-303264 kubelet[1275]: I1016 18:34:50.907149    1275 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 16 18:34:53 addons-303264 kubelet[1275]: I1016 18:34:53.683818    1275 scope.go:117] "RemoveContainer" containerID="471b03917051589c921393e5cedbfc98d7c6a1b6ffbf46cf6ab557a4d7530aa2"
	Oct 16 18:34:54 addons-303264 kubelet[1275]: I1016 18:34:54.394287    1275 scope.go:117] "RemoveContainer" containerID="471b03917051589c921393e5cedbfc98d7c6a1b6ffbf46cf6ab557a4d7530aa2"
	Oct 16 18:34:55 addons-303264 kubelet[1275]: I1016 18:34:55.490121    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv4r8\" (UniqueName: \"kubernetes.io/projected/74208f7b-d604-412e-b5be-cf2eed1ba93a-kube-api-access-zv4r8\") pod \"74208f7b-d604-412e-b5be-cf2eed1ba93a\" (UID: \"74208f7b-d604-412e-b5be-cf2eed1ba93a\") "
	Oct 16 18:34:55 addons-303264 kubelet[1275]: I1016 18:34:55.496020    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74208f7b-d604-412e-b5be-cf2eed1ba93a-kube-api-access-zv4r8" (OuterVolumeSpecName: "kube-api-access-zv4r8") pod "74208f7b-d604-412e-b5be-cf2eed1ba93a" (UID: "74208f7b-d604-412e-b5be-cf2eed1ba93a"). InnerVolumeSpecName "kube-api-access-zv4r8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 16 18:34:55 addons-303264 kubelet[1275]: I1016 18:34:55.591832    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zv4r8\" (UniqueName: \"kubernetes.io/projected/74208f7b-d604-412e-b5be-cf2eed1ba93a-kube-api-access-zv4r8\") on node \"addons-303264\" DevicePath \"\""
	Oct 16 18:34:56 addons-303264 kubelet[1275]: I1016 18:34:56.404609    1275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1961814a76c6d81fc85ea3b34f57c1c4c6067562de5091ae0a22baaad2573a63"
	Oct 16 18:35:05 addons-303264 kubelet[1275]: I1016 18:35:05.502384    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-7stxd" podStartSLOduration=101.101557921 podStartE2EDuration="1m52.502361218s" podCreationTimestamp="2025-10-16 18:33:13 +0000 UTC" firstStartedPulling="2025-10-16 18:34:50.061879916 +0000 UTC m=+111.505715956" lastFinishedPulling="2025-10-16 18:35:01.462683205 +0000 UTC m=+122.906519253" observedRunningTime="2025-10-16 18:35:02.470311064 +0000 UTC m=+123.914147112" watchObservedRunningTime="2025-10-16 18:35:05.502361218 +0000 UTC m=+126.946197258"
	Oct 16 18:35:08 addons-303264 kubelet[1275]: I1016 18:35:08.270988    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-5z9bs" podStartSLOduration=4.525338791 podStartE2EDuration="1m23.270959009s" podCreationTimestamp="2025-10-16 18:33:45 +0000 UTC" firstStartedPulling="2025-10-16 18:33:46.658995043 +0000 UTC m=+48.102831083" lastFinishedPulling="2025-10-16 18:35:05.404615253 +0000 UTC m=+126.848451301" observedRunningTime="2025-10-16 18:35:05.50286342 +0000 UTC m=+126.946699493" watchObservedRunningTime="2025-10-16 18:35:08.270959009 +0000 UTC m=+129.714795049"
	Oct 16 18:35:08 addons-303264 kubelet[1275]: I1016 18:35:08.306994    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2399eb6b-0b70-4a46-acca-4929071138df-gcp-creds\") pod \"busybox\" (UID: \"2399eb6b-0b70-4a46-acca-4929071138df\") " pod="default/busybox"
	Oct 16 18:35:08 addons-303264 kubelet[1275]: I1016 18:35:08.307218    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf78c\" (UniqueName: \"kubernetes.io/projected/2399eb6b-0b70-4a46-acca-4929071138df-kube-api-access-cf78c\") pod \"busybox\" (UID: \"2399eb6b-0b70-4a46-acca-4929071138df\") " pod="default/busybox"
	Oct 16 18:35:10 addons-303264 kubelet[1275]: I1016 18:35:10.686589    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddcaffca-cae0-489e-abe5-0532344fabc9" path="/var/lib/kubelet/pods/ddcaffca-cae0-489e-abe5-0532344fabc9/volumes"
	
	
	==> storage-provisioner [a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34] <==
	W1016 18:34:55.327762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:34:57.331698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:34:57.336984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:34:59.339780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:34:59.345511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:01.349324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:01.357993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:03.367882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:03.374473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:05.378068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:05.383287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:07.386202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:07.391320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:09.395549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:09.401387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:11.404929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:11.409418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:13.412008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:13.418775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:15.421316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:15.425436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:17.429365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:17.436047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:19.443827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:35:19.464651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-303264 -n addons-303264
helpers_test.go:269: (dbg) Run:  kubectl --context addons-303264 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-qcnqp ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx registry-creds-764b6fb674-25wdq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-303264 describe pod gcp-auth-certs-patch-qcnqp ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx registry-creds-764b6fb674-25wdq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-303264 describe pod gcp-auth-certs-patch-qcnqp ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx registry-creds-764b6fb674-25wdq: exit status 1 (91.25249ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-qcnqp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-j7q4k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ndrbx" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-25wdq" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-303264 describe pod gcp-auth-certs-patch-qcnqp ingress-nginx-admission-create-j7q4k ingress-nginx-admission-patch-ndrbx registry-creds-764b6fb674-25wdq: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable headlamp --alsologtostderr -v=1: exit status 11 (262.979097ms)

-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:35:22.158461  297720 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:22.159279  297720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:22.159323  297720 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:22.159348  297720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:22.159668  297720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:22.160012  297720 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:22.160732  297720 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:22.160759  297720 addons.go:606] checking whether the cluster is paused
	I1016 18:35:22.160879  297720 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:22.160905  297720 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:22.161430  297720 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:22.180376  297720 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:22.180452  297720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:22.199395  297720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:22.303833  297720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:22.303918  297720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:22.334752  297720 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:22.334773  297720 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:22.334779  297720 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:22.334783  297720 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:22.334787  297720 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:22.334791  297720 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:22.334794  297720 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:22.334797  297720 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:22.334800  297720 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:22.334807  297720 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:22.334810  297720 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:22.334813  297720 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:22.334821  297720 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:22.334828  297720 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:22.334831  297720 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:22.334836  297720 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:22.334843  297720 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:22.334846  297720 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:22.334850  297720 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:22.334853  297720 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:22.334857  297720 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:22.334860  297720 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:22.334863  297720 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:22.334866  297720 cri.go:89] found id: ""
	I1016 18:35:22.334916  297720 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:22.349627  297720 out.go:203] 
	W1016 18:35:22.352680  297720 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:22.352720  297720 out.go:285] * 
	* 
	W1016 18:35:22.359033  297720 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:22.362011  297720 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.23s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-jl554" [02b377f8-c8f4-482f-8efe-9869ee65af42] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003273216s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (289.736819ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:35:41.271264  298189 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:41.272132  298189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:41.272151  298189 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:41.272157  298189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:41.272433  298189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:41.272758  298189 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:41.273176  298189 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:41.273201  298189 addons.go:606] checking whether the cluster is paused
	I1016 18:35:41.273314  298189 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:41.273337  298189 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:41.273809  298189 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:41.297631  298189 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:41.297699  298189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:41.318673  298189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:41.420967  298189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:41.421097  298189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:41.473122  298189 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:41.473154  298189 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:41.473164  298189 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:41.473168  298189 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:41.473172  298189 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:41.473176  298189 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:41.473179  298189 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:41.473182  298189 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:41.473185  298189 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:41.473191  298189 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:41.473195  298189 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:41.473198  298189 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:41.473201  298189 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:41.473203  298189 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:41.473207  298189 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:41.473212  298189 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:41.473220  298189 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:41.473223  298189 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:41.473227  298189 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:41.473230  298189 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:41.473235  298189 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:41.473238  298189 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:41.473241  298189 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:41.473244  298189 cri.go:89] found id: ""
	I1016 18:35:41.473294  298189 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:41.490219  298189 out.go:203] 
	W1016 18:35:41.493285  298189 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:41.493309  298189 out.go:285] * 
	* 
	W1016 18:35:41.499635  298189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:41.502968  298189 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.30s)

                                                
                                    
TestAddons/parallel/LocalPath (10.65s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-303264 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-303264 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-303264 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8c0f28a5-2713-4d84-b712-da302dace190] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8c0f28a5-2713-4d84-b712-da302dace190] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8c0f28a5-2713-4d84-b712-da302dace190] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003358526s
addons_test.go:967: (dbg) Run:  kubectl --context addons-303264 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 ssh "cat /opt/local-path-provisioner/pvc-7f6b91b3-738c-4521-a1e3-e30bb8ace15b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-303264 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-303264 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (341.663746ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:35:45.413554  298367 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:45.414383  298367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:45.414430  298367 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:45.414454  298367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:45.414805  298367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:45.415242  298367 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:45.415708  298367 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:45.415772  298367 addons.go:606] checking whether the cluster is paused
	I1016 18:35:45.415949  298367 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:45.416005  298367 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:45.417346  298367 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:45.442930  298367 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:45.443126  298367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:45.462415  298367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:45.579833  298367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:45.579930  298367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:45.622083  298367 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:45.622107  298367 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:45.622112  298367 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:45.622116  298367 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:45.622120  298367 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:45.622124  298367 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:45.622127  298367 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:45.622130  298367 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:45.622136  298367 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:45.622141  298367 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:45.622145  298367 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:45.622148  298367 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:45.622151  298367 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:45.622155  298367 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:45.622158  298367 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:45.622163  298367 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:45.622167  298367 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:45.622170  298367 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:45.622174  298367 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:45.622177  298367 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:45.622182  298367 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:45.622185  298367 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:45.622188  298367 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:45.622191  298367 cri.go:89] found id: ""
	I1016 18:35:45.622244  298367 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:45.649413  298367 out.go:203] 
	W1016 18:35:45.652555  298367 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:45.652580  298367 out.go:285] * 
	* 
	W1016 18:35:45.658990  298367 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:45.662169  298367 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.65s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.38s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-frsg8" [9b71f6fc-8aad-4d80-b73c-bc6df9bd0a6d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004108816s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (370.80288ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:35:34.755255  297923 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:34.756044  297923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:34.756062  297923 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:34.756068  297923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:34.756362  297923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:34.756664  297923 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:34.757061  297923 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:34.757079  297923 addons.go:606] checking whether the cluster is paused
	I1016 18:35:34.757225  297923 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:34.757247  297923 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:34.757698  297923 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:34.793789  297923 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:34.793850  297923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:34.821362  297923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:34.944380  297923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:34.944454  297923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:34.981469  297923 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:34.981491  297923 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:34.981496  297923 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:34.981500  297923 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:34.981509  297923 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:34.981513  297923 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:34.981516  297923 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:34.981520  297923 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:34.981523  297923 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:34.981529  297923 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:34.981532  297923 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:34.981535  297923 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:34.981538  297923 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:34.981541  297923 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:34.981545  297923 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:34.981551  297923 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:34.981554  297923 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:34.981558  297923 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:34.981561  297923 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:34.981565  297923 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:34.981569  297923 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:34.981575  297923 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:34.981579  297923 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:34.981582  297923 cri.go:89] found id: ""
	I1016 18:35:34.981693  297923 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:35.000154  297923 out.go:203] 
	W1016 18:35:35.003007  297923 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:35.003037  297923 out.go:285] * 
	* 
	W1016 18:35:35.010774  297923 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:35.014163  297923 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.38s)

                                                
                                    
TestAddons/parallel/Yakd (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-qzhjz" [3891d01f-6efa-4ef2-8e23-66739bc8843b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003298717s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-303264 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-303264 addons disable yakd --alsologtostderr -v=1: exit status 11 (272.539301ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:35:28.427331  297781 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:35:28.428138  297781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:28.428158  297781 out.go:374] Setting ErrFile to fd 2...
	I1016 18:35:28.428163  297781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:35:28.428448  297781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:35:28.428760  297781 mustload.go:65] Loading cluster: addons-303264
	I1016 18:35:28.429206  297781 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:28.429224  297781 addons.go:606] checking whether the cluster is paused
	I1016 18:35:28.429332  297781 config.go:182] Loaded profile config "addons-303264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:35:28.429348  297781 host.go:66] Checking if "addons-303264" exists ...
	I1016 18:35:28.429802  297781 cli_runner.go:164] Run: docker container inspect addons-303264 --format={{.State.Status}}
	I1016 18:35:28.448171  297781 ssh_runner.go:195] Run: systemctl --version
	I1016 18:35:28.448242  297781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-303264
	I1016 18:35:28.467825  297781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/addons-303264/id_rsa Username:docker}
	I1016 18:35:28.571913  297781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:35:28.572010  297781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:35:28.609171  297781 cri.go:89] found id: "4c854724ff606415b7b4cd338cfdcba4ae2ed9545913afdbe77836a55f682630"
	I1016 18:35:28.609210  297781 cri.go:89] found id: "72c450061ca944aebcf21ba44cd0fb5c6faba231d5c3510d405f852f8c576446"
	I1016 18:35:28.609217  297781 cri.go:89] found id: "d3c44cd5669c90a23e68ca072b42ce384a3f474528fe2c9af093fd29c7c3fa1b"
	I1016 18:35:28.609221  297781 cri.go:89] found id: "2fd75860dad3eccbd0d79a17732d30758bd9d2456a835178445c635cbb925a8a"
	I1016 18:35:28.609224  297781 cri.go:89] found id: "b85fa5b248e27a71c1f12a3be974d1bdda3b4469c81daef49b7cfde0ffea797c"
	I1016 18:35:28.609228  297781 cri.go:89] found id: "817135be1fb1204992d3db557da6db2ccace5f73a469e16e6ef4a8d3a6538646"
	I1016 18:35:28.609232  297781 cri.go:89] found id: "cc0546bd9d12ac9715ff397c9b06b4fc5d1b8028491ba478a088e6e88b40010f"
	I1016 18:35:28.609236  297781 cri.go:89] found id: "83e350274adee6aabe6699937b3ee1da677b23930fb3f6a320244186014dc182"
	I1016 18:35:28.609239  297781 cri.go:89] found id: "4d4a9d8e6117902f1f0822f15f29b21a249dfee058117ef45732ff0ebbc9b63c"
	I1016 18:35:28.609247  297781 cri.go:89] found id: "54a940e28a47407c8dd3c7ff37cedcc6661f35e7010edab0a32f554dcebca95e"
	I1016 18:35:28.609250  297781 cri.go:89] found id: "ddb9eebdec6b1a8e687257395e11e928406b35550fba6ed6e91af596e7585f32"
	I1016 18:35:28.609254  297781 cri.go:89] found id: "42b57482939e2fd5f76685af64bbdfb293bceb35482b2bdc733c1573a63ac270"
	I1016 18:35:28.609258  297781 cri.go:89] found id: "a1df688b216b826cd54cb112e3dad71b1e97ae8c966ef26ed5c8ef3dd4b29aaa"
	I1016 18:35:28.609262  297781 cri.go:89] found id: "8049d0179c2ce30d32ea7f0beab524406581715f6d4f201e8e1f342170d48791"
	I1016 18:35:28.609265  297781 cri.go:89] found id: "2f9a34f263e49dc31cf9dc01ff9a56ba8c02307a08be02085e5ebc86366593ef"
	I1016 18:35:28.609274  297781 cri.go:89] found id: "a11803eed98f15ecf4cde77e7c2e9a9c4a51e24bf968cd172db10b9cb9173b34"
	I1016 18:35:28.609278  297781 cri.go:89] found id: "2150dbabd80c70b27e2ffa366b6a76822ac0da6532eef17cae4daccd51271b0b"
	I1016 18:35:28.609287  297781 cri.go:89] found id: "a43557a0c460383dd11dbc546a8b05c541e5a54ece4dec48717534f0976d5b55"
	I1016 18:35:28.609290  297781 cri.go:89] found id: "3478855350e27312631cd476f6eb2db3e964996f54f9f6f384b530804abbc3ad"
	I1016 18:35:28.609298  297781 cri.go:89] found id: "2f7b424d8bee40bd1f116496f34f26e561c275a27e0ae071483edcb822d76d67"
	I1016 18:35:28.609308  297781 cri.go:89] found id: "060c04d69de0bc184bc8f947999dbdc731a26bde67d27b5ccc7d12c5160d6872"
	I1016 18:35:28.609311  297781 cri.go:89] found id: "b9c25f79f72e12553a80f8e56a83533f0c92695295a4c2fefe60d0d43ea83f8c"
	I1016 18:35:28.609315  297781 cri.go:89] found id: "014826c0f016dd10054a3e938e96ca2dc16e3da7c51ac716d64785bc10883c23"
	I1016 18:35:28.609318  297781 cri.go:89] found id: ""
	I1016 18:35:28.609383  297781 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:35:28.625446  297781 out.go:203] 
	W1016 18:35:28.628301  297781 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:35:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:35:28.628328  297781 out.go:285] * 
	* 
	W1016 18:35:28.634778  297781 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:35:28.637667  297781 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-303264 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)
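Note on the addon-disable failures above (Headlamp, CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd): each one exits with MK_ADDON_DISABLE_PAUSED because the "checking whether the cluster is paused" step runs "sudo runc list -f json", and on this crio node that command fails with "open /run/runc: no such file or directory". The Go fragment below is a hypothetical minimal reproduction of that step, not minikube's implementation; the name listRuncContainers and the /run/runc guard are illustrative assumptions.

	package main

	// Hypothetical reproduction of the failing "list paused containers" step
	// from the logs above; NOT minikube's code. It runs the same command the
	// report shows and turns a missing /run/runc state directory into an
	// explicit error instead of a bare exit status 1.

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func listRuncContainers() ([]byte, error) {
		// "open /run/runc: no such file or directory" in the report means
		// runc's default state directory is absent on the crio node.
		if _, err := os.Stat("/run/runc"); err != nil {
			return nil, fmt.Errorf("runc state dir not usable (runtime likely not runc): %w", err)
		}
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list failed: %w\n%s", err, out)
		}
		return out, nil
	}

	func main() {
		if _, err := listRuncContainers(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

Run on a node whose runtime is crio (as in this job), the sketch surfaces the missing state directory up front instead of failing inside the disable path.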

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-703623 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-703623 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7z8qd" [ba5572e0-0a3e-44f1-a0e9-ed16be6ab525] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-703623 -n functional-703623
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-16 18:52:35.763339184 +0000 UTC m=+1307.596094192
functional_test.go:1645: (dbg) Run:  kubectl --context functional-703623 describe po hello-node-connect-7d85dfc575-7z8qd -n default
functional_test.go:1645: (dbg) kubectl --context functional-703623 describe po hello-node-connect-7d85dfc575-7z8qd -n default:
Name:             hello-node-connect-7d85dfc575-7z8qd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-703623/192.168.49.2
Start Time:       Thu, 16 Oct 2025 18:42:35 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4xp2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-c4xp2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7z8qd to functional-703623
  Normal   Pulling    6m59s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m59s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m59s (x5 over 10m)     kubelet            Error: ErrImagePull
  Normal   BackOff    4m53s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m53s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-703623 logs hello-node-connect-7d85dfc575-7z8qd -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-703623 logs hello-node-connect-7d85dfc575-7z8qd -n default: exit status 1 (112.460754ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7z8qd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-703623 logs hello-node-connect-7d85dfc575-7z8qd -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
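Note on the pull failures above: crio is running with short-name mode set to enforcing, so the unqualified reference kicbase/echo-server cannot be resolved unambiguously and the kubelet never gets an image. A registry-qualified reference (for example docker.io/kicbase/echo-server, assuming Docker Hub is the intended registry) would avoid the ambiguity. The Go fragment below only illustrates the qualification rule; qualifyImage and the docker.io default are assumptions, not part of the test suite.

	package main

	// Illustration of the short-name rule behind the rejection above: a
	// reference counts as registry-qualified only when the component before
	// the first "/" looks like a registry host (contains "." or ":", or is
	// "localhost"). The docker.io fallback is an assumed default registry.

	import (
		"fmt"
		"strings"
	)

	func qualifyImage(ref string) string {
		parts := strings.SplitN(ref, "/", 2)
		if len(parts) == 2 {
			host := parts[0]
			if strings.ContainsAny(host, ".:") || host == "localhost" {
				return ref // already qualified, e.g. registry.k8s.io/pause:3.9
			}
		}
		return "docker.io/" + ref // assumed default registry for this sketch
	}

	func main() {
		fmt.Println(qualifyImage("kicbase/echo-server"))       // docker.io/kicbase/echo-server
		fmt.Println(qualifyImage("registry.k8s.io/pause:3.9")) // unchanged
	}

Pinning the deployment to a qualified name (or listing an unqualified-search registry in the node's registries.conf) is the usual way around enforcing mode.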
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-703623 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-7z8qd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-703623/192.168.49.2
Start Time:       Thu, 16 Oct 2025 18:42:35 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4xp2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-c4xp2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  10m                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7z8qd to functional-703623
  Normal   Pulling    7m (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    1s (x43 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     1s (x43 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-703623 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-703623 logs -l app=hello-node-connect: exit status 1 (99.766818ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7z8qd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-703623 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-703623 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.233.255
IPs:                      10.109.233.255
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30965/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-703623
helpers_test.go:243: (dbg) docker inspect functional-703623:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2a8895279017ea07c1e6a2ee1f7fa5225aaf2ea587c37d0851df8b5059f4e6d",
	        "Created": "2025-10-16T18:39:29.217010333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306017,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:39:29.280884391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/c2a8895279017ea07c1e6a2ee1f7fa5225aaf2ea587c37d0851df8b5059f4e6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2a8895279017ea07c1e6a2ee1f7fa5225aaf2ea587c37d0851df8b5059f4e6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2a8895279017ea07c1e6a2ee1f7fa5225aaf2ea587c37d0851df8b5059f4e6d/hosts",
	        "LogPath": "/var/lib/docker/containers/c2a8895279017ea07c1e6a2ee1f7fa5225aaf2ea587c37d0851df8b5059f4e6d/c2a8895279017ea07c1e6a2ee1f7fa5225aaf2ea587c37d0851df8b5059f4e6d-json.log",
	        "Name": "/functional-703623",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-703623:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-703623",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2a8895279017ea07c1e6a2ee1f7fa5225aaf2ea587c37d0851df8b5059f4e6d",
	                "LowerDir": "/var/lib/docker/overlay2/743309a2254a1bb32e486f35fd5a00376c65eb4676f027fc3c44c62a55c3e36a-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/743309a2254a1bb32e486f35fd5a00376c65eb4676f027fc3c44c62a55c3e36a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/743309a2254a1bb32e486f35fd5a00376c65eb4676f027fc3c44c62a55c3e36a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/743309a2254a1bb32e486f35fd5a00376c65eb4676f027fc3c44c62a55c3e36a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-703623",
	                "Source": "/var/lib/docker/volumes/functional-703623/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-703623",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-703623",
	                "name.minikube.sigs.k8s.io": "functional-703623",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d245d8ac28fc91a92340bceb906bdb4b04775f9c30785621a5608fc4812c5d0",
	            "SandboxKey": "/var/run/docker/netns/1d245d8ac28f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-703623": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:65:33:06:9c:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e790765c00a3005f249d717701fa06595917be13576a00cf45396e840af349c",
	                    "EndpointID": "1e6c9e2aba9dbc9e4df0cc8d35c1d3e5869a097d7c71774132abb8d8749d4c9c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-703623",
	                        "c2a889527901"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
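
The docker inspect output above is also how minikube finds its way back into the node container: the cli_runner calls later in this log read the 22/tcp host binding (33148 here) to build the SSH endpoint. The sketch below does the same lookup with the Docker Go SDK; it is illustrative only, not code from minikube, and assumes the SDK packages are available.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "functional-703623")
	if err != nil {
		panic(err)
	}
	// Equivalent of:
	//   docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-703623
	bindings := info.NetworkSettings.Ports[nat.Port("22/tcp")]
	if len(bindings) == 0 {
		fmt.Println("no host binding for 22/tcp")
		return
	}
	fmt.Printf("ssh reachable at %s:%s\n", bindings[0].HostIP, bindings[0].HostPort)
}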
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-703623 -n functional-703623
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 logs -n 25: (1.451839881s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-703623 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ kubectl │ functional-703623 kubectl -- --context functional-703623 get pods                                                         │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:41 UTC │
	│ start   │ -p functional-703623 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:41 UTC │ 16 Oct 25 18:42 UTC │
	│ service │ invalid-svc -p functional-703623                                                                                          │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ config  │ functional-703623 config unset cpus                                                                                       │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ cp      │ functional-703623 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ config  │ functional-703623 config get cpus                                                                                         │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ config  │ functional-703623 config set cpus 2                                                                                       │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ config  │ functional-703623 config get cpus                                                                                         │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ config  │ functional-703623 config unset cpus                                                                                       │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ ssh     │ functional-703623 ssh -n functional-703623 sudo cat /home/docker/cp-test.txt                                              │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ config  │ functional-703623 config get cpus                                                                                         │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ ssh     │ functional-703623 ssh echo hello                                                                                          │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ cp      │ functional-703623 cp functional-703623:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd170038937/001/cp-test.txt │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ ssh     │ functional-703623 ssh cat /etc/hostname                                                                                   │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ ssh     │ functional-703623 ssh -n functional-703623 sudo cat /home/docker/cp-test.txt                                              │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ tunnel  │ functional-703623 tunnel --alsologtostderr                                                                                │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ tunnel  │ functional-703623 tunnel --alsologtostderr                                                                                │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ cp      │ functional-703623 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ ssh     │ functional-703623 ssh -n functional-703623 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ tunnel  │ functional-703623 tunnel --alsologtostderr                                                                                │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │                     │
	│ addons  │ functional-703623 addons list                                                                                             │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	│ addons  │ functional-703623 addons list -o json                                                                                     │ functional-703623 │ jenkins │ v1.37.0 │ 16 Oct 25 18:42 UTC │ 16 Oct 25 18:42 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:41:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:41:21.322742  310168 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:41:21.322909  310168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:41:21.322914  310168 out.go:374] Setting ErrFile to fd 2...
	I1016 18:41:21.322918  310168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:41:21.323200  310168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:41:21.323566  310168 out.go:368] Setting JSON to false
	I1016 18:41:21.324511  310168 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5011,"bootTime":1760635071,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:41:21.324568  310168 start.go:141] virtualization:  
	I1016 18:41:21.328068  310168 out.go:179] * [functional-703623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:41:21.331878  310168 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:41:21.331990  310168 notify.go:220] Checking for updates...
	I1016 18:41:21.337918  310168 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:41:21.341002  310168 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:41:21.343944  310168 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:41:21.346875  310168 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:41:21.349690  310168 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:41:21.353002  310168 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:41:21.353107  310168 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:41:21.379521  310168 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:41:21.379638  310168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:41:21.448483  310168 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-16 18:41:21.438312198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:41:21.448579  310168 docker.go:318] overlay module found
	I1016 18:41:21.451670  310168 out.go:179] * Using the docker driver based on existing profile
	I1016 18:41:21.454589  310168 start.go:305] selected driver: docker
	I1016 18:41:21.454600  310168 start.go:925] validating driver "docker" against &{Name:functional-703623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-703623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:41:21.454699  310168 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:41:21.454810  310168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:41:21.510585  310168 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-16 18:41:21.50184095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:41:21.511000  310168 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:41:21.511025  310168 cni.go:84] Creating CNI manager for ""
	I1016 18:41:21.511076  310168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:41:21.511117  310168 start.go:349] cluster config:
	{Name:functional-703623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-703623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:41:21.516191  310168 out.go:179] * Starting "functional-703623" primary control-plane node in "functional-703623" cluster
	I1016 18:41:21.519223  310168 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:41:21.522190  310168 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:41:21.525016  310168 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:41:21.525068  310168 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:41:21.525080  310168 cache.go:58] Caching tarball of preloaded images
	I1016 18:41:21.525102  310168 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:41:21.525253  310168 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:41:21.525263  310168 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:41:21.525374  310168 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/config.json ...
	I1016 18:41:21.552490  310168 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:41:21.552509  310168 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:41:21.552520  310168 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:41:21.552550  310168 start.go:360] acquireMachinesLock for functional-703623: {Name:mka874612fd319dc447cda7340b47e9e54e092cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:41:21.552606  310168 start.go:364] duration metric: took 36.981µs to acquireMachinesLock for "functional-703623"
	I1016 18:41:21.552624  310168 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:41:21.552628  310168 fix.go:54] fixHost starting: 
	I1016 18:41:21.552888  310168 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
	I1016 18:41:21.574718  310168 fix.go:112] recreateIfNeeded on functional-703623: state=Running err=<nil>
	W1016 18:41:21.574738  310168 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:41:21.577960  310168 out.go:252] * Updating the running docker "functional-703623" container ...
	I1016 18:41:21.577988  310168 machine.go:93] provisionDockerMachine start ...
	I1016 18:41:21.578088  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:21.597406  310168 main.go:141] libmachine: Using SSH client type: native
	I1016 18:41:21.597814  310168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1016 18:41:21.597822  310168 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:41:21.749894  310168 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-703623
	
	I1016 18:41:21.749927  310168 ubuntu.go:182] provisioning hostname "functional-703623"
	I1016 18:41:21.750967  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:21.769945  310168 main.go:141] libmachine: Using SSH client type: native
	I1016 18:41:21.770254  310168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1016 18:41:21.770263  310168 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-703623 && echo "functional-703623" | sudo tee /etc/hostname
	I1016 18:41:21.926438  310168 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-703623
	
	I1016 18:41:21.926505  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:21.945576  310168 main.go:141] libmachine: Using SSH client type: native
	I1016 18:41:21.945879  310168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1016 18:41:21.945892  310168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-703623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-703623/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-703623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:41:22.093636  310168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:41:22.093651  310168 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:41:22.093668  310168 ubuntu.go:190] setting up certificates
	I1016 18:41:22.093677  310168 provision.go:84] configureAuth start
	I1016 18:41:22.093739  310168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-703623
	I1016 18:41:22.113353  310168 provision.go:143] copyHostCerts
	I1016 18:41:22.113417  310168 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:41:22.113433  310168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:41:22.113507  310168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:41:22.113657  310168 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:41:22.113662  310168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:41:22.113689  310168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:41:22.113749  310168 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:41:22.113752  310168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:41:22.113775  310168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:41:22.113877  310168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.functional-703623 san=[127.0.0.1 192.168.49.2 functional-703623 localhost minikube]
	I1016 18:41:22.661271  310168 provision.go:177] copyRemoteCerts
	I1016 18:41:22.661323  310168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:41:22.661370  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:22.680862  310168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:41:22.784987  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:41:22.802541  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 18:41:22.820953  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:41:22.838810  310168 provision.go:87] duration metric: took 745.110483ms to configureAuth
	I1016 18:41:22.838827  310168 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:41:22.839024  310168 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:41:22.839139  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:22.860774  310168 main.go:141] libmachine: Using SSH client type: native
	I1016 18:41:22.861098  310168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1016 18:41:22.861110  310168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:41:28.249675  310168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:41:28.249689  310168 machine.go:96] duration metric: took 6.671695793s to provisionDockerMachine
	I1016 18:41:28.249699  310168 start.go:293] postStartSetup for "functional-703623" (driver="docker")
	I1016 18:41:28.249708  310168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:41:28.249792  310168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:41:28.249835  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:28.268845  310168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:41:28.373182  310168 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:41:28.376932  310168 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:41:28.376949  310168 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:41:28.376958  310168 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:41:28.377011  310168 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:41:28.377085  310168 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:41:28.377184  310168 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/test/nested/copy/290312/hosts -> hosts in /etc/test/nested/copy/290312
	I1016 18:41:28.377232  310168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/290312
	I1016 18:41:28.384870  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:41:28.402878  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/test/nested/copy/290312/hosts --> /etc/test/nested/copy/290312/hosts (40 bytes)
	I1016 18:41:28.423348  310168 start.go:296] duration metric: took 173.635104ms for postStartSetup
	I1016 18:41:28.423420  310168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:41:28.423474  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:28.441347  310168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:41:28.542469  310168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:41:28.547613  310168 fix.go:56] duration metric: took 6.994979617s for fixHost
	I1016 18:41:28.547628  310168 start.go:83] releasing machines lock for "functional-703623", held for 6.995015662s
	I1016 18:41:28.547707  310168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-703623
	I1016 18:41:28.564314  310168 ssh_runner.go:195] Run: cat /version.json
	I1016 18:41:28.564357  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:28.564623  310168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:41:28.564674  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:41:28.587713  310168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:41:28.589249  310168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:41:28.781248  310168 ssh_runner.go:195] Run: systemctl --version
	I1016 18:41:28.787705  310168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:41:28.824863  310168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:41:28.829230  310168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:41:28.829295  310168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:41:28.836963  310168 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:41:28.836976  310168 start.go:495] detecting cgroup driver to use...
	I1016 18:41:28.837005  310168 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:41:28.837056  310168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:41:28.852764  310168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:41:28.866202  310168 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:41:28.866260  310168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:41:28.882264  310168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:41:28.895509  310168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:41:29.033412  310168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:41:29.167925  310168 docker.go:234] disabling docker service ...
	I1016 18:41:29.167980  310168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:41:29.183311  310168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:41:29.196819  310168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:41:29.336998  310168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:41:29.479750  310168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:41:29.494585  310168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:41:29.510189  310168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:41:29.510245  310168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:41:29.519252  310168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:41:29.519309  310168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:41:29.528977  310168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:41:29.539632  310168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:41:29.549121  310168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:41:29.561454  310168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:41:29.570763  310168 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:41:29.579452  310168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:41:29.588486  310168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:41:29.596361  310168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:41:29.604120  310168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:41:29.738319  310168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:41:36.709248  310168 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.970906909s)
	I1016 18:41:36.709265  310168 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:41:36.709318  310168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:41:36.713283  310168 start.go:563] Will wait 60s for crictl version
	I1016 18:41:36.713340  310168 ssh_runner.go:195] Run: which crictl
	I1016 18:41:36.716915  310168 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:41:36.744689  310168 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:41:36.744772  310168 ssh_runner.go:195] Run: crio --version
	I1016 18:41:36.778104  310168 ssh_runner.go:195] Run: crio --version
	I1016 18:41:36.808590  310168 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:41:36.811615  310168 cli_runner.go:164] Run: docker network inspect functional-703623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:41:36.826561  310168 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:41:36.833894  310168 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1016 18:41:36.836774  310168 kubeadm.go:883] updating cluster {Name:functional-703623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-703623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:41:36.836893  310168 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:41:36.836968  310168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:41:36.871314  310168 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:41:36.871326  310168 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:41:36.871385  310168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:41:36.897236  310168 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:41:36.897249  310168 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:41:36.897256  310168 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1016 18:41:36.897360  310168 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-703623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-703623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:41:36.897440  310168 ssh_runner.go:195] Run: crio config
	I1016 18:41:36.963613  310168 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1016 18:41:36.963686  310168 cni.go:84] Creating CNI manager for ""
	I1016 18:41:36.963697  310168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:41:36.963710  310168 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:41:36.963733  310168 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-703623 NodeName:functional-703623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:41:36.963887  310168 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-703623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
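The kubeadm.yaml rendered above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) into the single file that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of splitting and inspecting such a multi-document file, assuming a local file name and the gopkg.in/yaml.v3 package (both illustrative, not what minikube itself does):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy; the report writes the real file to /var/tmp/minikube/kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document identifies itself, e.g. kubeadm.k8s.io/v1beta4 InitConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}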
	I1016 18:41:36.963962  310168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:41:36.971660  310168 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:41:36.971722  310168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:41:36.979442  310168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1016 18:41:36.991939  310168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:41:37.004472  310168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1016 18:41:37.019126  310168 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:41:37.025040  310168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:41:37.171315  310168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:41:37.184589  310168 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623 for IP: 192.168.49.2
	I1016 18:41:37.184601  310168 certs.go:195] generating shared ca certs ...
	I1016 18:41:37.184615  310168 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:41:37.184764  310168 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:41:37.184810  310168 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:41:37.184816  310168 certs.go:257] generating profile certs ...
	I1016 18:41:37.184896  310168 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.key
	I1016 18:41:37.184956  310168 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/apiserver.key.aa0b83d3
	I1016 18:41:37.184998  310168 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/proxy-client.key
	I1016 18:41:37.185106  310168 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:41:37.185155  310168 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:41:37.185166  310168 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:41:37.185194  310168 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:41:37.185213  310168 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:41:37.185235  310168 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:41:37.185282  310168 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:41:37.185966  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:41:37.205300  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:41:37.223485  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:41:37.241817  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:41:37.259772  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1016 18:41:37.277808  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:41:37.296818  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:41:37.314259  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:41:37.331866  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:41:37.349357  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:41:37.367034  310168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:41:37.384925  310168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:41:37.397227  310168 ssh_runner.go:195] Run: openssl version
	I1016 18:41:37.403753  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:41:37.412265  310168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:41:37.415893  310168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:41:37.415948  310168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:41:37.456886  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:41:37.465047  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:41:37.473345  310168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:41:37.477010  310168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:41:37.477064  310168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:41:37.518829  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:41:37.526849  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:41:37.535507  310168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:41:37.539517  310168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:41:37.539574  310168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:41:37.583934  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 18:41:37.592058  310168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:41:37.596227  310168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:41:37.638034  310168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:41:37.692581  310168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:41:37.755186  310168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:41:37.835938  310168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:41:37.939468  310168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
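Each `openssl x509 -checkend 86400` run above asks whether the certificate under /var/lib/minikube/certs will still be valid 24 hours from now; a non-zero exit would normally trigger regeneration. A rough Go equivalent of one such check, with the local file path as an assumption:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the report checks certs such as /var/lib/minikube/certs/apiserver-kubelet-client.crt.
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same idea as `-checkend 86400`: will the cert still be valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}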
	I1016 18:41:38.055467  310168 kubeadm.go:400] StartCluster: {Name:functional-703623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-703623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:41:38.055559  310168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:41:38.055644  310168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:41:38.100451  310168 cri.go:89] found id: "baa39f38804da0023ed887be5423865f644bf7254adf7df7b2717dfaf76d45ee"
	I1016 18:41:38.100465  310168 cri.go:89] found id: "5525af681fb786d6e55045dbd6699198c14af8151910d115c7563f1538d8235c"
	I1016 18:41:38.100470  310168 cri.go:89] found id: "2e307f1e62aff44919eb641421512b4101e2e7734c344165170c536dd860cf9d"
	I1016 18:41:38.100473  310168 cri.go:89] found id: "0238b2a725b57ab327a31159956c24f89f7a5014a481052731b76d3030b177bf"
	I1016 18:41:38.100475  310168 cri.go:89] found id: "3200e86150c96237b5e94adacbabbd3dac71dbe0f4ac4e5b2821315321036d35"
	I1016 18:41:38.100478  310168 cri.go:89] found id: "89cd12d0605a25f391571823c1fc5f93dfe63327949b09b6e6c969e185fe0372"
	I1016 18:41:38.100480  310168 cri.go:89] found id: "70e54777397ae82f8c7adab558c7b0429a286a5dce9dbcaf0813a901a894c217"
	I1016 18:41:38.100483  310168 cri.go:89] found id: "c3927f9e7e834b2f87f234a92d35fd7801c76309091c7d269b4c67f1bc29f4eb"
	I1016 18:41:38.100485  310168 cri.go:89] found id: "1bce88dc655785a3ce2fb9c55a7422f5be2ca6454d29ee0c4e113161d5e4e616"
	I1016 18:41:38.100492  310168 cri.go:89] found id: "f5d419a1d829442fae198bb01dacc016c2e53c0774d5a9c013eb72a3151b65b2"
	I1016 18:41:38.100494  310168 cri.go:89] found id: "f63ee134dd36c77754eed94ce10e8cfb14f42bc8f3238511525fea6ecadabdc9"
	I1016 18:41:38.100506  310168 cri.go:89] found id: "a364909e5fdb98177f4a9e84019b998f5ffda36d7e3995cd992d9ed49dc37f16"
	I1016 18:41:38.100508  310168 cri.go:89] found id: "572f033a761d6133a6be4bce6386e78e5129f08bb6132a3aa650959f6092a53a"
	I1016 18:41:38.100510  310168 cri.go:89] found id: "a631452aa748366a2b1c4875325780dce85ed6c24291507cd6c2e886e3dfe373"
	I1016 18:41:38.100520  310168 cri.go:89] found id: "658cfe905b32fa22cff81bf62fdc7c78c838f693f685536391c85f0e29c7dccb"
	I1016 18:41:38.100526  310168 cri.go:89] found id: ""
	I1016 18:41:38.100582  310168 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:41:38.114486  310168 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:41:38Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:41:38.114562  310168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:41:38.132861  310168 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:41:38.132870  310168 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:41:38.132935  310168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:41:38.146407  310168 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:41:38.146992  310168 kubeconfig.go:125] found "functional-703623" server: "https://192.168.49.2:8441"
	I1016 18:41:38.148701  310168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:41:38.159635  310168 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-16 18:39:39.086477431 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-16 18:41:37.011543938 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
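The drift detection above is just a `diff -u` between the previous /var/tmp/minikube/kubeadm.yaml and the freshly rendered .new file: an empty diff means nothing changed, while a non-empty diff (diff exits 1) triggers the control-plane reconfiguration that follows. A simplified sketch of that decision in Go, with local file names assumed for illustration:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical paths mirroring /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new from the report.
	cmd := exec.Command("diff", "-u", "kubeadm.yaml", "kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("configs identical, no reconfiguration needed")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// diff exits 1 when the files differ; the unified diff shows what changed.
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
	default:
		panic(err) // diff exit code 2 or failure to run diff at all
	}
}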
	I1016 18:41:38.159644  310168 kubeadm.go:1160] stopping kube-system containers ...
	I1016 18:41:38.159655  310168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1016 18:41:38.159722  310168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:41:38.217796  310168 cri.go:89] found id: "baa39f38804da0023ed887be5423865f644bf7254adf7df7b2717dfaf76d45ee"
	I1016 18:41:38.217807  310168 cri.go:89] found id: "5525af681fb786d6e55045dbd6699198c14af8151910d115c7563f1538d8235c"
	I1016 18:41:38.217821  310168 cri.go:89] found id: "2e307f1e62aff44919eb641421512b4101e2e7734c344165170c536dd860cf9d"
	I1016 18:41:38.217824  310168 cri.go:89] found id: "0238b2a725b57ab327a31159956c24f89f7a5014a481052731b76d3030b177bf"
	I1016 18:41:38.217827  310168 cri.go:89] found id: "3200e86150c96237b5e94adacbabbd3dac71dbe0f4ac4e5b2821315321036d35"
	I1016 18:41:38.217829  310168 cri.go:89] found id: "89cd12d0605a25f391571823c1fc5f93dfe63327949b09b6e6c969e185fe0372"
	I1016 18:41:38.217832  310168 cri.go:89] found id: "70e54777397ae82f8c7adab558c7b0429a286a5dce9dbcaf0813a901a894c217"
	I1016 18:41:38.217834  310168 cri.go:89] found id: "c3927f9e7e834b2f87f234a92d35fd7801c76309091c7d269b4c67f1bc29f4eb"
	I1016 18:41:38.217836  310168 cri.go:89] found id: "1bce88dc655785a3ce2fb9c55a7422f5be2ca6454d29ee0c4e113161d5e4e616"
	I1016 18:41:38.217841  310168 cri.go:89] found id: "f5d419a1d829442fae198bb01dacc016c2e53c0774d5a9c013eb72a3151b65b2"
	I1016 18:41:38.217844  310168 cri.go:89] found id: "f63ee134dd36c77754eed94ce10e8cfb14f42bc8f3238511525fea6ecadabdc9"
	I1016 18:41:38.217855  310168 cri.go:89] found id: "a364909e5fdb98177f4a9e84019b998f5ffda36d7e3995cd992d9ed49dc37f16"
	I1016 18:41:38.217858  310168 cri.go:89] found id: "572f033a761d6133a6be4bce6386e78e5129f08bb6132a3aa650959f6092a53a"
	I1016 18:41:38.217859  310168 cri.go:89] found id: "a631452aa748366a2b1c4875325780dce85ed6c24291507cd6c2e886e3dfe373"
	I1016 18:41:38.217862  310168 cri.go:89] found id: ""
	I1016 18:41:38.217866  310168 cri.go:252] Stopping containers: [baa39f38804da0023ed887be5423865f644bf7254adf7df7b2717dfaf76d45ee 5525af681fb786d6e55045dbd6699198c14af8151910d115c7563f1538d8235c 2e307f1e62aff44919eb641421512b4101e2e7734c344165170c536dd860cf9d 0238b2a725b57ab327a31159956c24f89f7a5014a481052731b76d3030b177bf 3200e86150c96237b5e94adacbabbd3dac71dbe0f4ac4e5b2821315321036d35 89cd12d0605a25f391571823c1fc5f93dfe63327949b09b6e6c969e185fe0372 70e54777397ae82f8c7adab558c7b0429a286a5dce9dbcaf0813a901a894c217 c3927f9e7e834b2f87f234a92d35fd7801c76309091c7d269b4c67f1bc29f4eb 1bce88dc655785a3ce2fb9c55a7422f5be2ca6454d29ee0c4e113161d5e4e616 f5d419a1d829442fae198bb01dacc016c2e53c0774d5a9c013eb72a3151b65b2 f63ee134dd36c77754eed94ce10e8cfb14f42bc8f3238511525fea6ecadabdc9 a364909e5fdb98177f4a9e84019b998f5ffda36d7e3995cd992d9ed49dc37f16 572f033a761d6133a6be4bce6386e78e5129f08bb6132a3aa650959f6092a53a a631452aa748366a2b1c4875325780dce85ed6c24291507cd6c2e886e3dfe373]
	I1016 18:41:38.217935  310168 ssh_runner.go:195] Run: which crictl
	I1016 18:41:38.224336  310168 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 baa39f38804da0023ed887be5423865f644bf7254adf7df7b2717dfaf76d45ee 5525af681fb786d6e55045dbd6699198c14af8151910d115c7563f1538d8235c 2e307f1e62aff44919eb641421512b4101e2e7734c344165170c536dd860cf9d 0238b2a725b57ab327a31159956c24f89f7a5014a481052731b76d3030b177bf 3200e86150c96237b5e94adacbabbd3dac71dbe0f4ac4e5b2821315321036d35 89cd12d0605a25f391571823c1fc5f93dfe63327949b09b6e6c969e185fe0372 70e54777397ae82f8c7adab558c7b0429a286a5dce9dbcaf0813a901a894c217 c3927f9e7e834b2f87f234a92d35fd7801c76309091c7d269b4c67f1bc29f4eb 1bce88dc655785a3ce2fb9c55a7422f5be2ca6454d29ee0c4e113161d5e4e616 f5d419a1d829442fae198bb01dacc016c2e53c0774d5a9c013eb72a3151b65b2 f63ee134dd36c77754eed94ce10e8cfb14f42bc8f3238511525fea6ecadabdc9 a364909e5fdb98177f4a9e84019b998f5ffda36d7e3995cd992d9ed49dc37f16 572f033a761d6133a6be4bce6386e78e5129f08bb6132a3aa650959f6092a53a a631452aa748366a2b1c4875325780dce85ed6c24291507cd6c2e886e3dfe373
	I1016 18:41:54.458592  310168 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 baa39f38804da0023ed887be5423865f644bf7254adf7df7b2717dfaf76d45ee 5525af681fb786d6e55045dbd6699198c14af8151910d115c7563f1538d8235c 2e307f1e62aff44919eb641421512b4101e2e7734c344165170c536dd860cf9d 0238b2a725b57ab327a31159956c24f89f7a5014a481052731b76d3030b177bf 3200e86150c96237b5e94adacbabbd3dac71dbe0f4ac4e5b2821315321036d35 89cd12d0605a25f391571823c1fc5f93dfe63327949b09b6e6c969e185fe0372 70e54777397ae82f8c7adab558c7b0429a286a5dce9dbcaf0813a901a894c217 c3927f9e7e834b2f87f234a92d35fd7801c76309091c7d269b4c67f1bc29f4eb 1bce88dc655785a3ce2fb9c55a7422f5be2ca6454d29ee0c4e113161d5e4e616 f5d419a1d829442fae198bb01dacc016c2e53c0774d5a9c013eb72a3151b65b2 f63ee134dd36c77754eed94ce10e8cfb14f42bc8f3238511525fea6ecadabdc9 a364909e5fdb98177f4a9e84019b998f5ffda36d7e3995cd992d9ed49dc37f16 572f033a761d6133a6be4bce6386e78e5129f08bb6132a3aa650959f6092a53a a631452aa748366a2b1c4875325780dce85ed6c24291507cd6c2e886e3dfe373:
(16.234221932s)
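The stop above hands each of the fourteen kube-system containers a 10-second grace period (`--timeout=10`), which is why the whole command takes about 16 seconds. A hedged sketch of driving the same crictl invocation from Go, with placeholder container IDs and an overall deadline added purely for illustration:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Placeholder IDs; the report stops fourteen kube-system containers by full ID.
	ids := []string{"baa39f38804d", "5525af681fb7"}

	// Bound the whole operation; crictl's own --timeout=10 is the per-container grace period.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	args := append([]string{"stop", "--timeout=10"}, ids...)
	cmd := exec.CommandContext(ctx, "crictl", args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}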
	I1016 18:41:54.458657  310168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1016 18:41:54.572032  310168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:41:54.580255  310168 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 16 18:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 16 18:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 16 18:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 16 18:39 /etc/kubernetes/scheduler.conf
	
	I1016 18:41:54.580316  310168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1016 18:41:54.589009  310168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1016 18:41:54.597245  310168 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:41:54.597340  310168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:41:54.605330  310168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1016 18:41:54.613270  310168 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:41:54.613327  310168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:41:54.621254  310168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1016 18:41:54.629243  310168 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:41:54.629300  310168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:41:54.636839  310168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:41:54.645010  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:41:54.690556  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:41:56.518977  310168 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.828395465s)
	I1016 18:41:56.519048  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:41:56.748552  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:41:56.833418  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:41:56.931916  310168 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:41:56.931981  310168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:41:57.432492  310168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:41:57.932761  310168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:41:57.946301  310168 api_server.go:72] duration metric: took 1.014395932s to wait for apiserver process to appear ...
	I1016 18:41:57.946315  310168 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:41:57.946331  310168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1016 18:42:01.517287  310168 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:42:01.517302  310168 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:42:01.517314  310168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1016 18:42:01.659838  310168 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:42:01.659854  310168 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:42:01.947294  310168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1016 18:42:01.963473  310168 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:42:01.963490  310168 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:42:02.447127  310168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1016 18:42:02.457041  310168 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:42:02.457059  310168 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:42:02.946419  310168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1016 18:42:02.954521  310168 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1016 18:42:02.968025  310168 api_server.go:141] control plane version: v1.34.1
	I1016 18:42:02.968042  310168 api_server.go:131] duration metric: took 5.021722099s to wait for apiserver health ...
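The healthz wait above is a plain poll loop against https://192.168.49.2:8441/healthz: 403 (anonymous access before the RBAC bootstrap finishes) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) both count as "not yet", and the loop stops at the first plain 200 ok. A stripped-down sketch of that loop; the endpoint comes from the log, while the client setup (including skipping TLS verification) is an illustration-only shortcut rather than what the real client does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.49.2:8441/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("apiserver healthy: %s\n", body)
				return
			}
			// 403/500 while RBAC bootstrap and post-start hooks finish; keep polling.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver health")
}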
	I1016 18:42:02.968050  310168 cni.go:84] Creating CNI manager for ""
	I1016 18:42:02.968055  310168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:42:02.972067  310168 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:42:02.975078  310168 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:42:02.979746  310168 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:42:02.979758  310168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:42:02.992716  310168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:42:03.446546  310168 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:42:03.450105  310168 system_pods.go:59] 8 kube-system pods found
	I1016 18:42:03.450128  310168 system_pods.go:61] "coredns-66bc5c9577-24tnj" [74c97f28-9d63-4653-8819-36cbb9ae4413] Running
	I1016 18:42:03.450138  310168 system_pods.go:61] "etcd-functional-703623" [97d27649-c8ee-49f7-ad81-ca12305f95c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:42:03.450142  310168 system_pods.go:61] "kindnet-59fhg" [8001bb10-055b-419b-8d94-425ce01f3dfd] Running
	I1016 18:42:03.450148  310168 system_pods.go:61] "kube-apiserver-functional-703623" [d89b9639-8abf-45c5-8676-c7fc280fa446] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:42:03.450154  310168 system_pods.go:61] "kube-controller-manager-functional-703623" [ea3a1672-db77-4a88-9ba1-ea3999314080] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:42:03.450158  310168 system_pods.go:61] "kube-proxy-84brh" [b3bc277b-f755-4bde-ab57-6216ae9ad3de] Running
	I1016 18:42:03.450166  310168 system_pods.go:61] "kube-scheduler-functional-703623" [1542ff9b-e5eb-461a-8461-6e37bd69c375] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:42:03.450171  310168 system_pods.go:61] "storage-provisioner" [d85fd27e-6100-424d-b766-222e782c8a55] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:42:03.450191  310168 system_pods.go:74] duration metric: took 3.633574ms to wait for pod list to return data ...
	I1016 18:42:03.450197  310168 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:42:03.452993  310168 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:42:03.453011  310168 node_conditions.go:123] node cpu capacity is 2
	I1016 18:42:03.453020  310168 node_conditions.go:105] duration metric: took 2.819888ms to run NodePressure ...
	I1016 18:42:03.453080  310168 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:42:03.705042  310168 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1016 18:42:03.710773  310168 kubeadm.go:743] kubelet initialised
	I1016 18:42:03.710784  310168 kubeadm.go:744] duration metric: took 5.729007ms waiting for restarted kubelet to initialise ...
	I1016 18:42:03.710798  310168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:42:03.720273  310168 ops.go:34] apiserver oom_adj: -16
	I1016 18:42:03.720283  310168 kubeadm.go:601] duration metric: took 25.587408358s to restartPrimaryControlPlane
	I1016 18:42:03.720291  310168 kubeadm.go:402] duration metric: took 25.664831756s to StartCluster
	I1016 18:42:03.720305  310168 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:42:03.720362  310168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:42:03.720963  310168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:42:03.721198  310168 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:42:03.721456  310168 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:42:03.721492  310168 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:42:03.721560  310168 addons.go:69] Setting default-storageclass=true in profile "functional-703623"
	I1016 18:42:03.721570  310168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-703623"
	I1016 18:42:03.721852  310168 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
	I1016 18:42:03.721996  310168 addons.go:69] Setting storage-provisioner=true in profile "functional-703623"
	I1016 18:42:03.722008  310168 addons.go:238] Setting addon storage-provisioner=true in "functional-703623"
	W1016 18:42:03.722014  310168 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:42:03.722052  310168 host.go:66] Checking if "functional-703623" exists ...
	I1016 18:42:03.722715  310168 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
	I1016 18:42:03.726446  310168 out.go:179] * Verifying Kubernetes components...
	I1016 18:42:03.731123  310168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:42:03.757998  310168 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:42:03.758665  310168 addons.go:238] Setting addon default-storageclass=true in "functional-703623"
	W1016 18:42:03.758674  310168 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:42:03.758716  310168 host.go:66] Checking if "functional-703623" exists ...
	I1016 18:42:03.759352  310168 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
	I1016 18:42:03.760879  310168 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:42:03.760889  310168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:42:03.760941  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:42:03.799386  310168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:42:03.801866  310168 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:42:03.801884  310168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:42:03.801941  310168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:42:03.830787  310168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:42:03.951313  310168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:42:04.037215  310168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:42:04.068753  310168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:42:04.767457  310168 node_ready.go:35] waiting up to 6m0s for node "functional-703623" to be "Ready" ...
	I1016 18:42:04.771140  310168 node_ready.go:49] node "functional-703623" is "Ready"
	I1016 18:42:04.771155  310168 node_ready.go:38] duration metric: took 3.682773ms for node "functional-703623" to be "Ready" ...
	I1016 18:42:04.771167  310168 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:42:04.771236  310168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:42:04.779881  310168 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:42:04.782863  310168 addons.go:514] duration metric: took 1.061354901s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:42:04.787727  310168 api_server.go:72] duration metric: took 1.066504233s to wait for apiserver process to appear ...
	I1016 18:42:04.787739  310168 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:42:04.787774  310168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1016 18:42:04.797385  310168 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1016 18:42:04.798395  310168 api_server.go:141] control plane version: v1.34.1
	I1016 18:42:04.798408  310168 api_server.go:131] duration metric: took 10.663372ms to wait for apiserver health ...
	I1016 18:42:04.798426  310168 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:42:04.801493  310168 system_pods.go:59] 8 kube-system pods found
	I1016 18:42:04.801508  310168 system_pods.go:61] "coredns-66bc5c9577-24tnj" [74c97f28-9d63-4653-8819-36cbb9ae4413] Running
	I1016 18:42:04.801517  310168 system_pods.go:61] "etcd-functional-703623" [97d27649-c8ee-49f7-ad81-ca12305f95c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:42:04.801521  310168 system_pods.go:61] "kindnet-59fhg" [8001bb10-055b-419b-8d94-425ce01f3dfd] Running
	I1016 18:42:04.801528  310168 system_pods.go:61] "kube-apiserver-functional-703623" [d89b9639-8abf-45c5-8676-c7fc280fa446] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:42:04.801535  310168 system_pods.go:61] "kube-controller-manager-functional-703623" [ea3a1672-db77-4a88-9ba1-ea3999314080] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:42:04.801539  310168 system_pods.go:61] "kube-proxy-84brh" [b3bc277b-f755-4bde-ab57-6216ae9ad3de] Running
	I1016 18:42:04.801545  310168 system_pods.go:61] "kube-scheduler-functional-703623" [1542ff9b-e5eb-461a-8461-6e37bd69c375] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:42:04.801550  310168 system_pods.go:61] "storage-provisioner" [d85fd27e-6100-424d-b766-222e782c8a55] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:42:04.801555  310168 system_pods.go:74] duration metric: took 3.123505ms to wait for pod list to return data ...
	I1016 18:42:04.801561  310168 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:42:04.803949  310168 default_sa.go:45] found service account: "default"
	I1016 18:42:04.803962  310168 default_sa.go:55] duration metric: took 2.396368ms for default service account to be created ...
	I1016 18:42:04.803970  310168 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:42:04.806974  310168 system_pods.go:86] 8 kube-system pods found
	I1016 18:42:04.806989  310168 system_pods.go:89] "coredns-66bc5c9577-24tnj" [74c97f28-9d63-4653-8819-36cbb9ae4413] Running
	I1016 18:42:04.806998  310168 system_pods.go:89] "etcd-functional-703623" [97d27649-c8ee-49f7-ad81-ca12305f95c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:42:04.807002  310168 system_pods.go:89] "kindnet-59fhg" [8001bb10-055b-419b-8d94-425ce01f3dfd] Running
	I1016 18:42:04.807009  310168 system_pods.go:89] "kube-apiserver-functional-703623" [d89b9639-8abf-45c5-8676-c7fc280fa446] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:42:04.807014  310168 system_pods.go:89] "kube-controller-manager-functional-703623" [ea3a1672-db77-4a88-9ba1-ea3999314080] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:42:04.807018  310168 system_pods.go:89] "kube-proxy-84brh" [b3bc277b-f755-4bde-ab57-6216ae9ad3de] Running
	I1016 18:42:04.807023  310168 system_pods.go:89] "kube-scheduler-functional-703623" [1542ff9b-e5eb-461a-8461-6e37bd69c375] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:42:04.807027  310168 system_pods.go:89] "storage-provisioner" [d85fd27e-6100-424d-b766-222e782c8a55] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:42:04.807032  310168 system_pods.go:126] duration metric: took 3.05834ms to wait for k8s-apps to be running ...
	I1016 18:42:04.807039  310168 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:42:04.807101  310168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:42:04.820387  310168 system_svc.go:56] duration metric: took 13.337732ms WaitForService to wait for kubelet
	I1016 18:42:04.820405  310168 kubeadm.go:586] duration metric: took 1.099186525s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:42:04.820423  310168 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:42:04.823035  310168 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:42:04.823050  310168 node_conditions.go:123] node cpu capacity is 2
	I1016 18:42:04.823060  310168 node_conditions.go:105] duration metric: took 2.632505ms to run NodePressure ...
	I1016 18:42:04.823072  310168 start.go:241] waiting for startup goroutines ...
	I1016 18:42:04.823078  310168 start.go:246] waiting for cluster config update ...
	I1016 18:42:04.823088  310168 start.go:255] writing updated cluster config ...
	I1016 18:42:04.823388  310168 ssh_runner.go:195] Run: rm -f paused
	I1016 18:42:04.826884  310168 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:42:04.831059  310168 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24tnj" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:04.836065  310168 pod_ready.go:94] pod "coredns-66bc5c9577-24tnj" is "Ready"
	I1016 18:42:04.836079  310168 pod_ready.go:86] duration metric: took 4.995592ms for pod "coredns-66bc5c9577-24tnj" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:04.838640  310168 pod_ready.go:83] waiting for pod "etcd-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 18:42:06.844730  310168 pod_ready.go:104] pod "etcd-functional-703623" is not "Ready", error: <nil>
	W1016 18:42:09.343911  310168 pod_ready.go:104] pod "etcd-functional-703623" is not "Ready", error: <nil>
	W1016 18:42:11.344113  310168 pod_ready.go:104] pod "etcd-functional-703623" is not "Ready", error: <nil>
	W1016 18:42:13.344707  310168 pod_ready.go:104] pod "etcd-functional-703623" is not "Ready", error: <nil>
	I1016 18:42:13.844739  310168 pod_ready.go:94] pod "etcd-functional-703623" is "Ready"
	I1016 18:42:13.844754  310168 pod_ready.go:86] duration metric: took 9.006101016s for pod "etcd-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:13.847402  310168 pod_ready.go:83] waiting for pod "kube-apiserver-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:14.352830  310168 pod_ready.go:94] pod "kube-apiserver-functional-703623" is "Ready"
	I1016 18:42:14.352844  310168 pod_ready.go:86] duration metric: took 505.428503ms for pod "kube-apiserver-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:14.355351  310168 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:14.361686  310168 pod_ready.go:94] pod "kube-controller-manager-functional-703623" is "Ready"
	I1016 18:42:14.361699  310168 pod_ready.go:86] duration metric: took 6.336474ms for pod "kube-controller-manager-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:14.364240  310168 pod_ready.go:83] waiting for pod "kube-proxy-84brh" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:14.442392  310168 pod_ready.go:94] pod "kube-proxy-84brh" is "Ready"
	I1016 18:42:14.442405  310168 pod_ready.go:86] duration metric: took 78.153502ms for pod "kube-proxy-84brh" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:14.642564  310168 pod_ready.go:83] waiting for pod "kube-scheduler-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:15.059797  310168 pod_ready.go:94] pod "kube-scheduler-functional-703623" is "Ready"
	I1016 18:42:15.059825  310168 pod_ready.go:86] duration metric: took 417.24692ms for pod "kube-scheduler-functional-703623" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:42:15.059836  310168 pod_ready.go:40] duration metric: took 10.232919154s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:42:15.131628  310168 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 18:42:15.134992  310168 out.go:179] * Done! kubectl is now configured to use "functional-703623" cluster and "default" namespace by default
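The final extra wait above (pod_ready.go) checks that every kube-system pod matching one of the listed labels (k8s-app=kube-dns, component=etcd, component=kube-apiserver, and so on) reports the Ready condition. A hedged client-go sketch of one such check, with the kubeconfig path and label selector as assumptions rather than minikube's actual helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the report uses one under the Jenkins workspace.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "component=etcd"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", pod.Name, ready)
	}
}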
	
	
	==> CRI-O <==
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.921631462Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-7t72d Namespace:default ID:9893c625cc05016f7dedc32e0027185a4d333555b350aae1bce158a4c26121e0 UID:e85ae532-c9db-4bdc-ba0b-a5d371790556 NetNS:/var/run/netns/abecfd21-f572-4cb5-be2a-39edfd9977e0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079200}] Aliases:map[]}"
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.921780879Z" level=info msg="Checking pod default_hello-node-75c85bcc94-7t72d for CNI network kindnet (type=ptp)"
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.927894923Z" level=info msg="Ran pod sandbox 9893c625cc05016f7dedc32e0027185a4d333555b350aae1bce158a4c26121e0 with infra container: default/hello-node-75c85bcc94-7t72d/POD" id=4a6f4329-3d43-4e84-a624-03d96293d284 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.93241177Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=acc4cb2e-3a4f-497c-8e77-83a885a9153d name=/runtime.v1.ImageService/PullImage
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.958760613Z" level=info msg="Stopping pod sandbox: c81b589a3e6fb8bf0f217d2973ef625eb6a9044476cea2da0c3823a396d13f02" id=66e4a67d-9e44-4791-bca5-26a0a0199635 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.958818288Z" level=info msg="Stopped pod sandbox (already stopped): c81b589a3e6fb8bf0f217d2973ef625eb6a9044476cea2da0c3823a396d13f02" id=66e4a67d-9e44-4791-bca5-26a0a0199635 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.959305818Z" level=info msg="Removing pod sandbox: c81b589a3e6fb8bf0f217d2973ef625eb6a9044476cea2da0c3823a396d13f02" id=c29eab2e-b12e-4f02-9f69-fa6085a9dea8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.96322315Z" level=info msg="Removed pod sandbox: c81b589a3e6fb8bf0f217d2973ef625eb6a9044476cea2da0c3823a396d13f02" id=c29eab2e-b12e-4f02-9f69-fa6085a9dea8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.963770282Z" level=info msg="Stopping pod sandbox: 4e122d219be3f783aa61d630e5166e2a6d550f974fd9e642f8e62b923de1219d" id=65ba0483-2c05-42a8-aa70-4119acefb0de name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.963812433Z" level=info msg="Stopped pod sandbox (already stopped): 4e122d219be3f783aa61d630e5166e2a6d550f974fd9e642f8e62b923de1219d" id=65ba0483-2c05-42a8-aa70-4119acefb0de name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.964242058Z" level=info msg="Removing pod sandbox: 4e122d219be3f783aa61d630e5166e2a6d550f974fd9e642f8e62b923de1219d" id=68b072cc-7ccb-4832-95de-e47a11c27bac name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.968302276Z" level=info msg="Removed pod sandbox: 4e122d219be3f783aa61d630e5166e2a6d550f974fd9e642f8e62b923de1219d" id=68b072cc-7ccb-4832-95de-e47a11c27bac name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.968859559Z" level=info msg="Stopping pod sandbox: 8f47b67b30e7555c18603048be849ac9bb967a0dc596259e7b6f64f74fcb7530" id=9e91e266-49df-459e-8258-571459dc7dcf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.968910119Z" level=info msg="Stopped pod sandbox (already stopped): 8f47b67b30e7555c18603048be849ac9bb967a0dc596259e7b6f64f74fcb7530" id=9e91e266-49df-459e-8258-571459dc7dcf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.969367799Z" level=info msg="Removing pod sandbox: 8f47b67b30e7555c18603048be849ac9bb967a0dc596259e7b6f64f74fcb7530" id=a0b13689-58ff-4328-81e5-b6b9c4d8a409 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 18:42:56 functional-703623 crio[3528]: time="2025-10-16T18:42:56.973432448Z" level=info msg="Removed pod sandbox: 8f47b67b30e7555c18603048be849ac9bb967a0dc596259e7b6f64f74fcb7530" id=a0b13689-58ff-4328-81e5-b6b9c4d8a409 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 18:43:08 functional-703623 crio[3528]: time="2025-10-16T18:43:08.87356183Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=745889dc-af7a-4b40-bca8-15ed0e708de3 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:43:18 functional-703623 crio[3528]: time="2025-10-16T18:43:18.874238508Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3b8eb97a-effd-4333-8e79-c038ffb92e29 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:43:32 functional-703623 crio[3528]: time="2025-10-16T18:43:32.87382645Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=990291d2-7be5-4c29-9552-ef3f3ec11c5c name=/runtime.v1.ImageService/PullImage
	Oct 16 18:44:08 functional-703623 crio[3528]: time="2025-10-16T18:44:08.874280601Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=28b8120c-5940-4810-ad0c-5deced33633d name=/runtime.v1.ImageService/PullImage
	Oct 16 18:44:27 functional-703623 crio[3528]: time="2025-10-16T18:44:27.873769824Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ba54bb11-0c6d-480e-920c-988ecb5e246d name=/runtime.v1.ImageService/PullImage
	Oct 16 18:45:36 functional-703623 crio[3528]: time="2025-10-16T18:45:36.875037977Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=82b62699-df62-4913-93d8-b5dd319bd05b name=/runtime.v1.ImageService/PullImage
	Oct 16 18:45:53 functional-703623 crio[3528]: time="2025-10-16T18:45:53.873171705Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=735456b9-5fff-4899-b191-7f067917d654 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:48:25 functional-703623 crio[3528]: time="2025-10-16T18:48:25.873409633Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5a2f76fa-69b9-4465-91ac-8785f5df57d9 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:48:39 functional-703623 crio[3528]: time="2025-10-16T18:48:39.873366413Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=33f2e146-63ee-40ec-b0df-5a04e00ef217 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2a19a2a2abf90       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   51dc5fa9dc1ce       sp-pod                                      default
	260eba91c9818       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   3b9b03bb595ab       nginx-svc                                   default
	dfdb95ab0e7c7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       4                   cecc66813c9fa       storage-provisioner                         kube-system
	8956ad4a5483f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   1afa99be3a41d       kindnet-59fhg                               kube-system
	3b9eed011dc44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       3                   cecc66813c9fa       storage-provisioner                         kube-system
	feed35ebf2041       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   2c11ccda783d8       kube-apiserver-functional-703623            kube-system
	3d9c946253d44       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   9c27d69a28d7d       kube-controller-manager-functional-703623   kube-system
	05ee649c3f129       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   0e03e44448d82       etcd-functional-703623                      kube-system
	bfa2ab16642d9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   11aaef6d649d1       kube-scheduler-functional-703623            kube-system
	aff780cebd209       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   d6619555872d0       coredns-66bc5c9577-24tnj                    kube-system
	baa39f38804da       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Exited              kindnet-cni               2                   1afa99be3a41d       kindnet-59fhg                               kube-system
	7711e8cf9f1d2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   29f33d2b1b78c       kube-proxy-84brh                            kube-system
	5525af681fb78       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Exited              kube-scheduler            2                   11aaef6d649d1       kube-scheduler-functional-703623            kube-system
	0238b2a725b57       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Exited              etcd                      2                   0e03e44448d82       etcd-functional-703623                      kube-system
	89cd12d0605a2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Exited              kube-controller-manager   2                   9c27d69a28d7d       kube-controller-manager-functional-703623   kube-system
	f63ee134dd36c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   d6619555872d0       coredns-66bc5c9577-24tnj                    kube-system
	a364909e5fdb9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   29f33d2b1b78c       kube-proxy-84brh                            kube-system
	
	
	==> coredns [aff780cebd209fb676a566c87711504c337394fbbfe72fc988f2bc16d8370a7e] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37823 - 19860 "HINFO IN 568160508465683198.9018692913069858338. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023906498s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f63ee134dd36c77754eed94ce10e8cfb14f42bc8f3238511525fea6ecadabdc9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37257 - 26592 "HINFO IN 1347879774137885559.3018553840236844350. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02280488s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-703623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-703623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=functional-703623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_39_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:39:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-703623
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:52:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:52:14 +0000   Thu, 16 Oct 2025 18:39:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:52:14 +0000   Thu, 16 Oct 2025 18:39:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:52:14 +0000   Thu, 16 Oct 2025 18:39:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:52:14 +0000   Thu, 16 Oct 2025 18:40:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-703623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                2bd34da2-054e-40d6-97ba-cd8ce14e9b09
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7t72d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  default                     hello-node-connect-7d85dfc575-7z8qd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 coredns-66bc5c9577-24tnj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-703623                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-59fhg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-703623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-703623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-84brh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-703623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-703623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-703623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-703623 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-703623 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-703623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-703623 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           12m                node-controller  Node functional-703623 event: Registered Node functional-703623 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-703623 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-703623 event: Registered Node functional-703623 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-703623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-703623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-703623 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-703623 event: Registered Node functional-703623 in Controller
	
	
	==> dmesg <==
	[Oct16 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015294] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035217] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.777829] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.353148] kauditd_printk_skb: 36 callbacks suppressed
	[Oct16 17:39] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=00000000a1708097{9P.session} n=00000000c48db394
	[  +0.001150] FS-Cache: O-key=[10] '34323935323233313231'
	[  +0.000794] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a1708097{9P.session} n=0000000008f2874d
	[  +0.001104] FS-Cache: N-key=[10] '34323935323233313231'
	[Oct16 17:40] hrtimer: interrupt took 46683506 ns
	[Oct16 18:30] kauditd_printk_skb: 8 callbacks suppressed
	[Oct16 18:32] overlayfs: idmapped layers are currently not supported
	[  +0.067059] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct16 18:38] overlayfs: idmapped layers are currently not supported
	[Oct16 18:39] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0238b2a725b57ab327a31159956c24f89f7a5014a481052731b76d3030b177bf] <==
	{"level":"warn","ts":"2025-10-16T18:41:40.874037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:41:40.880029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:41:40.904396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:41:40.931180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:41:40.955321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:41:40.966428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:41:41.065966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49976","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T18:41:54.197475Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-16T18:41:54.197536Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-703623","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-16T18:41:54.197624Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T18:41:54.199304Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T18:41:54.199365Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T18:41:54.199384Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-16T18:41:54.199449Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-16T18:41:54.199466Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-16T18:41:54.199479Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T18:41:54.199522Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-16T18:41:54.199531Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-16T18:41:54.199572Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T18:41:54.199582Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-16T18:41:54.199588Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T18:41:54.203536Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-16T18:41:54.203635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T18:41:54.203670Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-16T18:41:54.203678Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-703623","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [05ee649c3f1295fab0cd6878e9df8e533599d81e2d9e6f34ad207ea779cee17f] <==
	{"level":"warn","ts":"2025-10-16T18:42:00.310124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.332356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.347717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.372664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.386900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.417866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.450266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.473694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.490119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.502725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.518516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.564642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.573234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.581682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.602858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.618431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.636469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.653448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.707505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.749959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.762492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:42:00.822741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53646","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T18:51:59.416332Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1148}
	{"level":"info","ts":"2025-10-16T18:51:59.441020Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1148,"took":"24.37225ms","hash":2673697178,"current-db-size-bytes":3391488,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1449984,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-16T18:51:59.441104Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2673697178,"revision":1148,"compact-revision":-1}
	
	
	==> kernel <==
	 18:52:37 up  1:34,  0 user,  load average: 0.31, 0.44, 1.36
	Linux functional-703623 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8956ad4a5483f5e599b6f9fa914ee65cd2e005641ce38f433de2bc345bfd8077] <==
	I1016 18:50:32.513615       1 main.go:301] handling current node
	I1016 18:50:42.518620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:50:42.518660       1 main.go:301] handling current node
	I1016 18:50:52.519251       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:50:52.519363       1 main.go:301] handling current node
	I1016 18:51:02.520605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:51:02.520740       1 main.go:301] handling current node
	I1016 18:51:12.513343       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:51:12.513379       1 main.go:301] handling current node
	I1016 18:51:22.514006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:51:22.514173       1 main.go:301] handling current node
	I1016 18:51:32.513595       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:51:32.513629       1 main.go:301] handling current node
	I1016 18:51:42.514931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:51:42.514967       1 main.go:301] handling current node
	I1016 18:51:52.519138       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:51:52.519175       1 main.go:301] handling current node
	I1016 18:52:02.517430       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:52:02.517567       1 main.go:301] handling current node
	I1016 18:52:12.519629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:52:12.519756       1 main.go:301] handling current node
	I1016 18:52:22.519128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:52:22.519240       1 main.go:301] handling current node
	I1016 18:52:32.514080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:52:32.514118       1 main.go:301] handling current node
	
	
	==> kindnet [baa39f38804da0023ed887be5423865f644bf7254adf7df7b2717dfaf76d45ee] <==
	I1016 18:41:38.052176       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:41:38.052439       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1016 18:41:38.052573       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:41:38.052585       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:41:38.052598       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:41:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	
	
	==> kube-apiserver [feed35ebf2041425ae4d6bcc4bf67fc9297c199c438c133b084f62d1c4791622] <==
	I1016 18:42:01.789433       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:42:01.789462       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:42:01.789492       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:42:01.794468       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1016 18:42:01.794524       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 18:42:01.821637       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:42:01.833084       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:42:01.849975       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:42:01.938740       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:42:02.498923       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1016 18:42:02.785269       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1016 18:42:02.786771       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:42:02.792686       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:42:03.439587       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 18:42:03.557475       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:42:03.631128       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:42:03.639377       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:42:12.073571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:42:18.478102       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.98.220"}
	I1016 18:42:24.838544       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.87.177"}
	I1016 18:42:35.407317       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.233.255"}
	E1016 18:42:48.369198       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32830: use of closed network connection
	E1016 18:42:49.187363       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1016 18:42:56.686120       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.174.226"}
	I1016 18:52:01.747272       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3d9c946253d44f5d8c7afef5002f274bd45b6f4937bedf9f963500ce3551b657] <==
	I1016 18:42:05.061546       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:42:05.061577       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:42:05.061624       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:42:05.061730       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:42:05.061752       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:42:05.063160       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:42:05.066414       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:42:05.066787       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:42:05.067817       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:42:05.067900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:42:05.067914       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:42:05.067924       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:42:05.073021       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:42:05.075248       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 18:42:05.076409       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:42:05.078946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:42:05.082258       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:42:05.086596       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:42:05.101254       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:42:05.106481       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:42:05.110895       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:42:05.112236       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:42:05.112634       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 18:42:05.112891       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:42:05.112980       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-controller-manager [89cd12d0605a25f391571823c1fc5f93dfe63327949b09b6e6c969e185fe0372] <==
	I1016 18:41:38.330852       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:41:40.758557       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1016 18:41:40.758745       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:41:40.762528       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1016 18:41:40.763230       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1016 18:41:40.763484       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:41:40.763608       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1016 18:41:51.858033       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [7711e8cf9f1d2d17a5c0f6464b4b2ef480c1f6dafb48b2067d436f99e97519d4] <==
	I1016 18:41:40.919877       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:41:41.909592       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:41:42.286149       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:41:42.286215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 18:41:42.286399       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:41:42.415696       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:41:42.415841       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:41:42.420414       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:41:42.420872       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:41:42.420998       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:41:42.429195       1 config.go:200] "Starting service config controller"
	I1016 18:41:42.437264       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:41:42.430689       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:41:42.447136       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:41:42.447147       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:41:42.430666       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:41:42.447167       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:41:42.447172       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:41:42.431511       1 config.go:309] "Starting node config controller"
	I1016 18:41:42.447216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:41:42.447222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:41:42.541421       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [a364909e5fdb98177f4a9e84019b998f5ffda36d7e3995cd992d9ed49dc37f16] <==
	I1016 18:40:57.087209       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:40:58.237319       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:41:00.649403       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:41:00.649547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 18:41:00.662795       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:41:00.862342       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:41:00.862404       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:41:00.923523       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:41:00.923881       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:41:00.928354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:41:00.929746       1 config.go:200] "Starting service config controller"
	I1016 18:41:00.929813       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:41:00.929854       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:41:00.929894       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:41:00.929930       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:41:00.929983       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:41:00.932874       1 config.go:309] "Starting node config controller"
	I1016 18:41:00.933929       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:41:00.933985       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:41:01.035250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:41:01.035287       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:41:01.035336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5525af681fb786d6e55045dbd6699198c14af8151910d115c7563f1538d8235c] <==
	I1016 18:41:41.472510       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:41:43.577103       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 18:41:43.577172       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1016 18:41:43.577239       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1016 18:41:43.584276       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1016 18:41:43.584355       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1016 18:41:43.584447       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 18:41:43.584472       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	E1016 18:41:43.584496       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="RequestHeaderAuthRequestController"
	I1016 18:41:43.584674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:41:43.585363       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1016 18:41:43.584873       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:41:43.585448       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:41:43.584886       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:41:43.585544       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1016 18:41:43.585494       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1016 18:41:43.585582       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:41:43.584757       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1016 18:41:43.585697       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1016 18:41:43.585743       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1016 18:41:43.585221       1 requestheader_controller.go:187] Shutting down RequestHeaderAuthRequestController
	E1016 18:41:43.585876       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bfa2ab16642d92709a304fefcc857584616f689a02ad07a51e19ad678eb1ae7e] <==
	I1016 18:42:01.663166       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:42:01.673264       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:42:01.673375       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:42:01.674612       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:42:01.689336       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 18:42:01.729998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 18:42:01.730177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:42:01.730245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:42:01.730302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:42:01.730361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:42:01.730444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:42:01.730492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:42:01.732461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:42:01.732541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:42:01.732604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:42:01.732671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:42:01.732724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:42:01.732797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:42:01.732856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:42:01.732929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:42:01.732988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:42:01.733074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:42:01.737294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:42:01.733149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1016 18:42:03.374499       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:49:53 functional-703623 kubelet[4124]: E1016 18:49:53.873492    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:50:01 functional-703623 kubelet[4124]: E1016 18:50:01.873216    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:50:06 functional-703623 kubelet[4124]: E1016 18:50:06.873993    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:50:15 functional-703623 kubelet[4124]: E1016 18:50:15.873771    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:50:18 functional-703623 kubelet[4124]: E1016 18:50:18.873303    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:50:30 functional-703623 kubelet[4124]: E1016 18:50:30.873590    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:50:32 functional-703623 kubelet[4124]: E1016 18:50:32.873791    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:50:44 functional-703623 kubelet[4124]: E1016 18:50:44.873063    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:50:47 functional-703623 kubelet[4124]: E1016 18:50:47.873679    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:50:59 functional-703623 kubelet[4124]: E1016 18:50:59.873348    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:51:00 functional-703623 kubelet[4124]: E1016 18:51:00.874611    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:51:13 functional-703623 kubelet[4124]: E1016 18:51:13.873116    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:51:14 functional-703623 kubelet[4124]: E1016 18:51:14.874927    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:51:24 functional-703623 kubelet[4124]: E1016 18:51:24.874727    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:51:29 functional-703623 kubelet[4124]: E1016 18:51:29.873511    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:51:35 functional-703623 kubelet[4124]: E1016 18:51:35.873684    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:51:42 functional-703623 kubelet[4124]: E1016 18:51:42.873819    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:51:47 functional-703623 kubelet[4124]: E1016 18:51:47.873302    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:51:56 functional-703623 kubelet[4124]: E1016 18:51:56.874543    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:51:59 functional-703623 kubelet[4124]: E1016 18:51:59.873678    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:52:10 functional-703623 kubelet[4124]: E1016 18:52:10.873393    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:52:11 functional-703623 kubelet[4124]: E1016 18:52:11.873441    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:52:21 functional-703623 kubelet[4124]: E1016 18:52:21.873639    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	Oct 16 18:52:25 functional-703623 kubelet[4124]: E1016 18:52:25.872706    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7t72d" podUID="e85ae532-c9db-4bdc-ba0b-a5d371790556"
	Oct 16 18:52:35 functional-703623 kubelet[4124]: E1016 18:52:35.873325    4124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7z8qd" podUID="ba5572e0-0a3e-44f1-a0e9-ed16be6ab525"
	
	
	==> storage-provisioner [3b9eed011dc449212d3959361dd74c2464fc83ed024ea6f0bab1af94e9cbbb3a] <==
	I1016 18:42:02.229689       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:42:02.231093       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [dfdb95ab0e7c762cc89d73aa91a1a128832112632156f4e8c4710c6e35f7e2b1] <==
	W1016 18:52:13.212434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:15.215296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:15.220084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:17.222804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:17.227683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:19.234143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:19.242570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:21.245987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:21.251553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:23.254771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:23.261656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:25.265363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:25.271333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:27.275049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:27.281978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:29.285298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:29.289637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:31.292654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:31.297187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:33.300801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:33.308054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:35.310950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:35.315665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:37.319455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:52:37.324426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-703623 -n functional-703623
helpers_test.go:269: (dbg) Run:  kubectl --context functional-703623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-7t72d hello-node-connect-7d85dfc575-7z8qd
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-703623 describe pod hello-node-75c85bcc94-7t72d hello-node-connect-7d85dfc575-7z8qd
helpers_test.go:290: (dbg) kubectl --context functional-703623 describe pod hello-node-75c85bcc94-7t72d hello-node-connect-7d85dfc575-7z8qd:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-7t72d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-703623/192.168.49.2
	Start Time:       Thu, 16 Oct 2025 18:42:56 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qmhdv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qmhdv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m42s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7t72d to functional-703623
	  Normal   Pulling    6m45s (x5 over 9m42s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m45s (x5 over 9m42s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m45s (x5 over 9m42s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m37s (x21 over 9m41s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m37s (x21 over 9m41s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-7z8qd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-703623/192.168.49.2
	Start Time:       Thu, 16 Oct 2025 18:42:35 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4xp2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c4xp2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7z8qd to functional-703623
	  Normal   Pulling    7m2s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m2s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m2s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x43 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.51s)
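
Note: the post-mortem above points to a single root cause rather than a flaw in the service plumbing. CRI-O on this runner enforces short-name resolution, so the unqualified image reference "kicbase/echo-server" (treated as kicbase/echo-server:latest) resolves ambiguously and every pull ends in ErrImagePull/ImagePullBackOff, which is why neither echo-server pod ever becomes Ready. A minimal diagnostic sketch, assuming the same kube context and the deployment names shown above, is to point the deployment at a fully qualified reference and watch the rollout; the docker.io/ prefix simply sidesteps the short-name policy and is not something the test itself does:

    kubectl --context functional-703623 set image deployment/hello-node-connect \
        echo-server=docker.io/kicbase/echo-server:latest
    kubectl --context functional-703623 rollout status deployment/hello-node-connect --timeout=120s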

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-703623 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-703623 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7t72d" [e85ae532-c9db-4bdc-ba0b-a5d371790556] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1016 18:45:08.359851  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:45:36.070202  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:50:08.360180  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-703623 -n functional-703623
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-16 18:52:57.200634185 +0000 UTC m=+1329.033389177
functional_test.go:1460: (dbg) Run:  kubectl --context functional-703623 describe po hello-node-75c85bcc94-7t72d -n default
functional_test.go:1460: (dbg) kubectl --context functional-703623 describe po hello-node-75c85bcc94-7t72d -n default:
Name:             hello-node-75c85bcc94-7t72d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-703623/192.168.49.2
Start Time:       Thu, 16 Oct 2025 18:42:56 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qmhdv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qmhdv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7t72d to functional-703623
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-703623 logs hello-node-75c85bcc94-7t72d -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-703623 logs hello-node-75c85bcc94-7t72d -n default: exit status 1 (100.880559ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7t72d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-703623 logs hello-node-75c85bcc94-7t72d -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)
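
Note: DeployApp fails for the same short-name reason documented under ServiceCmdConnect above; the pod describe output repeats the "short name mode is enforcing" pull error verbatim. Besides fully qualifying the image, the other standard knob is a containers-registries short-name alias on the node. The drop-in below is an illustrative sketch (the file name is arbitrary; the [aliases] table is the mechanism CRI-O's image library consults), and applying it would also require restarting CRI-O, which the test suite naturally does not do mid-run:

    # /etc/containers/registries.conf.d/99-echo-server.conf (on the minikube node)
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"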

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 service --namespace=default --https --url hello-node: exit status 115 (603.117758ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30354
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-703623 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 service hello-node --url --format={{.IP}}: exit status 115 (446.355043ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-703623 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 service hello-node --url: exit status 115 (463.561971ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30354
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-703623 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30354
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.46s)
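
Note: the HTTPS, Format, and URL subtests in this group all exit with SVC_UNREACHABLE for the same underlying reason: the hello-node service exists and has a NodePort (30354 in the stdout above), but its only backing pod is stuck in ImagePullBackOff, so the service has no ready endpoints and minikube refuses to print a usable URL. A quick confirmation sketch, assuming the same context:

    kubectl --context functional-703623 get endpoints hello-node
    kubectl --context functional-703623 get pods -l app=hello-node

An empty ENDPOINTS column next to a non-Ready pod is exactly what the "no running pod for service hello-node found" message is reporting.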

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image load --daemon kicbase/echo-server:functional-703623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-703623" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image load --daemon kicbase/echo-server:functional-703623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-703623" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-703623
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image load --daemon kicbase/echo-server:functional-703623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-703623" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image save kicbase/echo-server:functional-703623 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1016 18:53:06.896743  318374 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:53:06.897009  318374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:53:06.897038  318374 out.go:374] Setting ErrFile to fd 2...
	I1016 18:53:06.897056  318374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:53:06.897377  318374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:53:06.898055  318374 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:53:06.898239  318374 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:53:06.898783  318374 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
	I1016 18:53:06.924456  318374 ssh_runner.go:195] Run: systemctl --version
	I1016 18:53:06.924514  318374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
	I1016 18:53:06.947738  318374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
	I1016 18:53:07.057025  318374 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1016 18:53:07.057072  318374 cache_images.go:254] Failed to load cached images for "functional-703623": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1016 18:53:07.057090  318374 cache_images.go:266] failed pushing to: functional-703623

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)
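
Note: this failure cascades from ImageSaveToFile directly above: the earlier image save never produced /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar, so image load correctly reports "no such file or directory" for it. When reproducing locally it is worth confirming the archive exists before attempting the load; a sketch using an illustrative path:

    out/minikube-linux-arm64 -p functional-703623 image save \
        kicbase/echo-server:functional-703623 /tmp/echo-server-save.tar --alsologtostderr
    ls -l /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-703623 image load /tmp/echo-server-save.tar --alsologtostderr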

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-703623
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image save --daemon kicbase/echo-server:functional-703623 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-703623
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-703623: exit status 1 (22.256224ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-703623

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-703623

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
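
Note: the daemon-save failure is the tail end of the same chain: the earlier ImageLoad*Daemon subtests showed that kicbase/echo-server:functional-703623 never made it into the cluster's image store, so image save --daemon has nothing to export and the follow-up docker image inspect of localhost/kicbase/echo-server:functional-703623 predictably finds no such image. A first triage step for this whole ImageCommands group is simply to list what the node actually holds; the second command assumes crictl is present in the node image, as it is on minikube's CRI-O builds:

    out/minikube-linux-arm64 -p functional-703623 image ls
    out/minikube-linux-arm64 -p functional-703623 ssh -- sudo crictl images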

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (537.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 stop --alsologtostderr -v 5
E1016 18:58:46.299993  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 stop --alsologtostderr -v 5: (27.345808843s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 start --wait true --alsologtostderr -v 5
E1016 19:00:08.223031  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:00:08.359600  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:02:24.359697  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:02:52.064385  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:05:08.360085  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-556988 start --wait true --alsologtostderr -v 5: exit status 80 (8m27.219700105s)

                                                
                                                
-- stdout --
	* [ha-556988] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-556988" primary control-plane node in "ha-556988" cluster
	* Pulling base image v0.0.48-1760363564-21724 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-556988-m02" control-plane node in "ha-556988" cluster
	* Pulling base image v0.0.48-1760363564-21724 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-556988-m03" control-plane node in "ha-556988" cluster
	* Pulling base image v0.0.48-1760363564-21724 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:58:51.718625  337340 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:58:51.718820  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.718832  337340 out.go:374] Setting ErrFile to fd 2...
	I1016 18:58:51.718837  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.719085  337340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:58:51.719452  337340 out.go:368] Setting JSON to false
	I1016 18:58:51.720287  337340 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6061,"bootTime":1760635071,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:58:51.720360  337340 start.go:141] virtualization:  
	I1016 18:58:51.723622  337340 out.go:179] * [ha-556988] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:58:51.727453  337340 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:58:51.727561  337340 notify.go:220] Checking for updates...
	I1016 18:58:51.733207  337340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:58:51.736137  337340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:58:51.738974  337340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:58:51.741951  337340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:58:51.744907  337340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:58:51.748268  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:51.748399  337340 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:58:51.772958  337340 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:58:51.773087  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.833709  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.824777239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.833825  337340 docker.go:318] overlay module found
	I1016 18:58:51.836939  337340 out.go:179] * Using the docker driver based on existing profile
	I1016 18:58:51.839798  337340 start.go:305] selected driver: docker
	I1016 18:58:51.839818  337340 start.go:925] validating driver "docker" against &{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.839961  337340 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:58:51.840070  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.894329  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.884487993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.894716  337340 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:58:51.894754  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:51.894821  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:51.894871  337340 start.go:349] cluster config:
	{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.898184  337340 out.go:179] * Starting "ha-556988" primary control-plane node in "ha-556988" cluster
	I1016 18:58:51.901075  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:58:51.904106  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:58:51.906904  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:51.906960  337340 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:58:51.906971  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:58:51.906995  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:58:51.907065  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:58:51.907074  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:58:51.907213  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:51.927032  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:58:51.927054  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:58:51.927071  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:58:51.927094  337340 start.go:360] acquireMachinesLock for ha-556988: {Name:mk71c3a6201989099f6bf114603feb8455c41f5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:58:51.927153  337340 start.go:364] duration metric: took 41.945µs to acquireMachinesLock for "ha-556988"
	I1016 18:58:51.927187  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:58:51.927198  337340 fix.go:54] fixHost starting: 
	I1016 18:58:51.927452  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:51.944496  337340 fix.go:112] recreateIfNeeded on ha-556988: state=Stopped err=<nil>
	W1016 18:58:51.944531  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:58:51.947809  337340 out.go:252] * Restarting existing docker container for "ha-556988" ...
	I1016 18:58:51.947886  337340 cli_runner.go:164] Run: docker start ha-556988
	I1016 18:58:52.211064  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:52.238130  337340 kic.go:430] container "ha-556988" state is running.
	I1016 18:58:52.238496  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:52.265254  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:52.265525  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:58:52.265595  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:52.289105  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:52.289561  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:52.289576  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:58:52.290191  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:58:55.440597  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
	I1016 18:58:55.440631  337340 ubuntu.go:182] provisioning hostname "ha-556988"
	I1016 18:58:55.440701  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.458200  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.458510  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.458528  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988 && echo "ha-556988" | sudo tee /etc/hostname
	I1016 18:58:55.615084  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
	I1016 18:58:55.615165  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.633608  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.633925  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.633950  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:58:55.781429  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
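For context on the provisioning sequence above: provisionDockerMachine dials the forwarded SSH port of the restarted container (127.0.0.1:33178 here), retries while sshd is still coming up (hence the single `ssh: handshake failed: EOF` before success), and then runs the hostname commands shown in the log. A minimal sketch of that dial-with-retry-and-run pattern using golang.org/x/crypto/ssh; the key path, user, and retry budget are illustrative, not minikube's actual values.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH keeps retrying until sshd inside the freshly started
// container accepts the handshake, then runs a single command.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not a production host
		Timeout:         5 * time.Second,
	}

	var client *ssh.Client
	for attempt := 0; attempt < 10; attempt++ {
		client, err = ssh.Dial("tcp", addr, cfg)
		if err == nil {
			break
		}
		// Early attempts typically fail with "handshake failed: EOF"
		// while sshd is still starting; back off and retry.
		time.Sleep(time.Second)
	}
	if client == nil {
		return "", fmt.Errorf("ssh dial %s: %w", addr, err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.Output(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33178", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/ha-556988/id_rsa"), "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```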
	I1016 18:58:55.781454  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:58:55.781481  337340 ubuntu.go:190] setting up certificates
	I1016 18:58:55.781490  337340 provision.go:84] configureAuth start
	I1016 18:58:55.781555  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:55.798617  337340 provision.go:143] copyHostCerts
	I1016 18:58:55.798664  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798709  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:58:55.798730  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798812  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:58:55.798915  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798938  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:58:55.798949  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798989  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:58:55.799046  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799068  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:58:55.799078  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799112  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:58:55.799198  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988 san=[127.0.0.1 192.168.49.2 ha-556988 localhost minikube]
	I1016 18:58:56.377628  337340 provision.go:177] copyRemoteCerts
	I1016 18:58:56.377703  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:58:56.377743  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.397097  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:56.500593  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:58:56.500663  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:58:56.518370  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:58:56.518433  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:58:56.536547  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:58:56.536628  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1016 18:58:56.555074  337340 provision.go:87] duration metric: took 773.569729ms to configureAuth
	I1016 18:58:56.555099  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:58:56.555326  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:56.555445  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.572643  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:56.572965  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:56.572986  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:58:56.890339  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:58:56.890428  337340 machine.go:96] duration metric: took 4.624892872s to provisionDockerMachine
	I1016 18:58:56.890454  337340 start.go:293] postStartSetup for "ha-556988" (driver="docker")
	I1016 18:58:56.890480  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:58:56.890607  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:58:56.890683  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.913382  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.017075  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:58:57.021857  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:58:57.021887  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:58:57.021899  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:58:57.021965  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:58:57.022045  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:58:57.022052  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:58:57.022160  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:58:57.030852  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:57.048968  337340 start.go:296] duration metric: took 158.482858ms for postStartSetup
	I1016 18:58:57.049157  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:58:57.049222  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.066845  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.166118  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
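The two `df` probes above read the used-space percentage and the free gigabytes on /var before the restart continues. A rough Go equivalent using golang.org/x/sys/unix is sketched below; the path is taken from the log, but minikube itself shells out to `df` rather than calling statfs directly, and df rounds its percentages slightly differently.

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

// diskUsage reports an approximate used percentage and the free
// gigabytes for the filesystem backing path, roughly what the two
// `df ... /var | awk ...` probes in the log extract.
func diskUsage(path string) (usedPct int, freeGB uint64, err error) {
	var st unix.Statfs_t
	if err = unix.Statfs(path, &st); err != nil {
		return 0, 0, err
	}
	total := st.Blocks * uint64(st.Bsize)
	free := st.Bavail * uint64(st.Bsize)
	used := total - st.Bfree*uint64(st.Bsize)
	if total > 0 {
		usedPct = int(used * 100 / total)
	}
	return usedPct, free / (1 << 30), nil
}

func main() {
	pct, free, err := diskUsage("/var")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/var: %d%% used, %dGB free\n", pct, free)
}
```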
	I1016 18:58:57.170752  337340 fix.go:56] duration metric: took 5.243547354s for fixHost
	I1016 18:58:57.170779  337340 start.go:83] releasing machines lock for "ha-556988", held for 5.243610027s
	I1016 18:58:57.170862  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:57.187672  337340 ssh_runner.go:195] Run: cat /version.json
	I1016 18:58:57.187699  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:58:57.187723  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.187757  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.206208  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.213346  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.391366  337340 ssh_runner.go:195] Run: systemctl --version
	I1016 18:58:57.397910  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:58:57.434230  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:58:57.439686  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:58:57.439757  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:58:57.447828  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:58:57.447851  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:58:57.447886  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:58:57.447952  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:58:57.463944  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:58:57.477406  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:58:57.477468  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:58:57.493693  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:58:57.507255  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:58:57.614114  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:58:57.729976  337340 docker.go:234] disabling docker service ...
	I1016 18:58:57.730050  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:58:57.745940  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:58:57.758869  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:58:57.875693  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:58:57.984271  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:58:57.997324  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:58:58.012287  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:58:58.012387  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.023645  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:58:58.023740  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.036244  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.046489  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.055569  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:58:58.065264  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.075123  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.084654  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.094603  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:58:58.102554  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:58:58.110013  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.218071  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
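The steps between 18:58:58.012 and 18:58:58.094 rewrite /etc/crio/crio.conf.d/02-crio.conf with in-place `sed` calls: pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup, and open unprivileged low ports via default_sysctls, after which the units are reloaded and crio is restarted. A rough Go sketch of the same "replace the key's line if present, otherwise append it" idea follows; the file path and keys come from the log, while the helper itself is illustrative and deliberately simpler than the sed expressions.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// setKey rewrites every `key = ...` line in a crio drop-in config,
// appending the setting if no such line exists.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		trim := strings.TrimSpace(l)
		if strings.HasPrefix(trim, key+" =") || strings.HasPrefix(trim, key+"=") {
			lines[i] = fmt.Sprintf("%s = %q", key, value)
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, fmt.Sprintf("%s = %q", key, value))
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		log.Fatal(err)
	}
	if err := setKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
	// After editing, the log reloads units and restarts the runtime:
	//   systemctl daemon-reload && systemctl restart crio
}
```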
	I1016 18:58:58.347916  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:58:58.348026  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:58:58.351852  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:58:58.351953  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:58:58.355554  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:58:58.382893  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:58:58.383032  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.410837  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.446345  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:58:58.449238  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:58:58.465498  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:58:58.469406  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
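The bash one-liner above regenerates /etc/hosts by dropping any existing `host.minikube.internal` entry and appending the gateway mapping, then copies the temp file back into place. A small Go sketch of the same filter-and-append idea; the path and the 192.168.49.1 gateway come from the log, and error handling is kept minimal.

```go
package main

import (
	"log"
	"os"
	"strings"
)

// pinHostsEntry rewrites an /etc/hosts-style file so that exactly one
// line maps hostname to ip, preserving every other entry.
func pinHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale mapping, as the grep -v in the log does
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```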
	I1016 18:58:58.479415  337340 kubeadm.go:883] updating cluster {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:58:58.479566  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:58.479620  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.516159  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.516181  337340 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:58:58.516239  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.543999  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.544030  337340 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:58:58.544040  337340 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 18:58:58.544140  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:58:58.544225  337340 ssh_runner.go:195] Run: crio config
	I1016 18:58:58.618937  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:58.618957  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:58.618981  337340 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:58:58.619008  337340 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-556988 NodeName:ha-556988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:58:58.619133  337340 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-556988"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
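The generated /var/tmp/minikube/kubeadm.yaml.new shown above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. Purely to illustrate that layout, the sketch below splits such a multi-document file and reads each document's apiVersion and kind with gopkg.in/yaml.v3; minikube renders the file from templates and does not parse it back this way.

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// listKinds decodes each YAML document in a kubeadm config file and
// returns its apiVersion/kind pair.
func listKinds(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	var kinds []string
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			return nil, err
		}
		kinds = append(kinds, doc.APIVersion+"/"+doc.Kind)
	}
	return kinds, nil
}

func main() {
	kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	for _, k := range kinds {
		fmt.Println(k)
	}
	// Expected for the file above:
	//   kubeadm.k8s.io/v1beta4/InitConfiguration
	//   kubeadm.k8s.io/v1beta4/ClusterConfiguration
	//   kubelet.config.k8s.io/v1beta1/KubeletConfiguration
	//   kubeproxy.config.k8s.io/v1alpha1/KubeProxyConfiguration
}
```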
	
	I1016 18:58:58.619160  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:58:58.619222  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:58:58.631579  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:58:58.631697  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
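kube-vip's control-plane load-balancing needs the ipvs kernel modules, so the step at 18:58:58.619 probes them with `lsmod | grep ip_vs` and, on the non-zero exit seen here, gives up load-balancing and writes the ARP-mode static pod manifest shown above. A minimal sketch of that probe:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasIPVS reports whether any ip_vs kernel module is currently loaded,
// the same signal the `lsmod | grep ip_vs` probe in the log relies on.
func hasIPVS() bool {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true
		}
	}
	return false
}

func main() {
	if hasIPVS() {
		fmt.Println("ipvs available: enable control-plane load-balancing")
	} else {
		fmt.Println("ipvs missing: fall back to ARP-mode VIP only")
	}
}
```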
	I1016 18:58:58.631769  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:58:58.640083  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:58:58.640188  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1016 18:58:58.648089  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1016 18:58:58.661375  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:58:58.674583  337340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1016 18:58:58.687345  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:58:58.700772  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:58:58.704503  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:58:58.714276  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.833486  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:58:58.851263  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.2
	I1016 18:58:58.851288  337340 certs.go:195] generating shared ca certs ...
	I1016 18:58:58.851306  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:58.851471  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:58:58.851524  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:58:58.851537  337340 certs.go:257] generating profile certs ...
	I1016 18:58:58.851633  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:58:58.851666  337340 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797
	I1016 18:58:58.851690  337340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1016 18:58:59.152876  337340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 ...
	I1016 18:58:59.152960  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797: {Name:mk3d22e55d5c37c04716dc4d1ee3cbc4538fbdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153223  337340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 ...
	I1016 18:58:59.153265  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797: {Name:mkda3eb1676258b3c7a46448934b59023d353a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153432  337340 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt
	I1016 18:58:59.153636  337340 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key
	I1016 18:58:59.153853  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:58:59.153891  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:58:59.153923  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:58:59.153965  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:58:59.153998  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:58:59.154028  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:58:59.154076  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:58:59.154112  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:58:59.154143  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:58:59.154239  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:58:59.154300  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:58:59.154325  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:58:59.154381  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:58:59.154435  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:58:59.154491  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:58:59.154609  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:59.154690  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.154737  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.154771  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.155500  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:58:59.174654  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:58:59.194053  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:58:59.220036  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:58:59.241089  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:58:59.259308  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:58:59.276555  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:58:59.293855  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:58:59.311467  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:58:59.329708  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:58:59.347304  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:58:59.364602  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:58:59.377635  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:58:59.384255  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:58:59.393733  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397737  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397824  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.438696  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:58:59.446893  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:58:59.455572  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459600  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459668  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.500823  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:58:59.509003  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:58:59.520724  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528394  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528467  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.578056  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
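Each CA copied into /usr/share/ca-certificates above is made trusted by linking it into /etc/ssl/certs under the name `<subject-hash>.0`, where the hash comes from `openssl x509 -hash` (51391683, 3ec20f2e, and b5213941 in this run). A sketch of that hash-then-symlink step; like the log, it shells out to openssl rather than reimplementing the subject-hash algorithm, and the example path is illustrative.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject hash, the layout the system trust store expects.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, matching the `test -L || ln -fs` guard in the log
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```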
	I1016 18:58:59.586838  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:58:59.594144  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:58:59.638647  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:58:59.694080  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:58:59.765575  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:58:59.865472  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:58:59.931581  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
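The series of `openssl x509 -checkend 86400` calls above verifies that none of the existing control-plane certificates expires within the next 24 hours before they are reused for the restart. An equivalent check with the standard library is sketched below; the certificate path is illustrative, and minikube itself shells out to openssl for this step.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, the same condition `openssl x509 -checkend` tests
// with d expressed in seconds.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h: regenerate before restart")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
```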
	I1016 18:58:59.986682  337340 kubeadm.go:400] StartCluster: {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:59.986889  337340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:58:59.986987  337340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:59:00.020883  337340 cri.go:89] found id: "a6a97464c4b58734820a4c747fbaa58980bfcb3cdc5b94d0a49804bd9ecaf2d2"
	I1016 18:59:00.020964  337340 cri.go:89] found id: "37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7"
	I1016 18:59:00.020984  337340 cri.go:89] found id: "13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf"
	I1016 18:59:00.021007  337340 cri.go:89] found id: "ccd1663977e230bbda3cae69e035a19bb725c3f88efd4340e2acdb82e35b17b4"
	I1016 18:59:00.021041  337340 cri.go:89] found id: "0947527fb7c6600575f80d864636e177c1330efa7ab3caff116116cd0d07fe91"
	I1016 18:59:00.021071  337340 cri.go:89] found id: ""
	I1016 18:59:00.021222  337340 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:59:00.048970  337340 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:59:00Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:59:00.049191  337340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:59:00.064913  337340 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:59:00.065020  337340 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:59:00.065128  337340 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:59:00.081513  337340 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:00.082142  337340 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-556988" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.082376  337340 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "ha-556988" cluster setting kubeconfig missing "ha-556988" context setting]
	I1016 18:59:00.082852  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.083778  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:59:00.084642  337340 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1016 18:59:00.084775  337340 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:59:00.084800  337340 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:59:00.084835  337340 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:59:00.084861  337340 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:59:00.084885  337340 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:59:00.085481  337340 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:59:00.133777  337340 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1016 18:59:00.133865  337340 kubeadm.go:601] duration metric: took 68.819342ms to restartPrimaryControlPlane
	I1016 18:59:00.133892  337340 kubeadm.go:402] duration metric: took 147.219085ms to StartCluster
	I1016 18:59:00.133962  337340 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.134087  337340 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.134991  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.135381  337340 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:00.135451  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:00.135503  337340 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:59:00.136478  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.165207  337340 out.go:179] * Enabled addons: 
	I1016 18:59:00.168421  337340 addons.go:514] duration metric: took 32.907014ms for enable addons: enabled=[]
	I1016 18:59:00.168517  337340 start.go:246] waiting for cluster config update ...
	I1016 18:59:00.168542  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:00.191362  337340 out.go:203] 
	I1016 18:59:00.209821  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.209961  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.213495  337340 out.go:179] * Starting "ha-556988-m02" control-plane node in "ha-556988" cluster
	I1016 18:59:00.216452  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:00.223747  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:00.226672  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:00.226714  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:00.226842  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:00.226852  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:00.227106  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.227394  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:00.266622  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:00.266645  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:00.266659  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:00.266685  337340 start.go:360] acquireMachinesLock for ha-556988-m02: {Name:mkb742ea24d411e97f6bd75961598d91ba358bd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:00.266743  337340 start.go:364] duration metric: took 41.445µs to acquireMachinesLock for "ha-556988-m02"
	I1016 18:59:00.266766  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:00.266772  337340 fix.go:54] fixHost starting: m02
	I1016 18:59:00.267061  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.297319  337340 fix.go:112] recreateIfNeeded on ha-556988-m02: state=Stopped err=<nil>
	W1016 18:59:00.297360  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:00.300819  337340 out.go:252] * Restarting existing docker container for "ha-556988-m02" ...
	I1016 18:59:00.300940  337340 cli_runner.go:164] Run: docker start ha-556988-m02
	I1016 18:59:00.708144  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.733543  337340 kic.go:430] container "ha-556988-m02" state is running.
	I1016 18:59:00.733902  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:00.760804  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.761309  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:00.761403  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:00.808146  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:00.808685  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:00.808701  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:00.809303  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40522->127.0.0.1:33183: read: connection reset by peer
	I1016 18:59:04.034070  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.034139  337340 ubuntu.go:182] provisioning hostname "ha-556988-m02"
	I1016 18:59:04.034243  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.063655  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.063975  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.063993  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m02 && echo "ha-556988-m02" | sudo tee /etc/hostname
	I1016 18:59:04.267030  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.267113  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.300780  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.301103  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.301127  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:04.469711  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:04.469796  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:04.469828  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:04.469864  337340 provision.go:84] configureAuth start
	I1016 18:59:04.469974  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:04.508993  337340 provision.go:143] copyHostCerts
	I1016 18:59:04.509035  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509067  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:04.509074  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509305  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:04.509422  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509441  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:04.509446  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509496  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:04.509545  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509562  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:04.509566  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509591  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:04.509649  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m02 san=[127.0.0.1 192.168.49.3 ha-556988-m02 localhost minikube]
	I1016 18:59:05.303068  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:05.303142  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:05.303195  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.322174  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:05.428054  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:05.428132  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:59:05.461825  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:05.461888  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:05.487317  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:05.487378  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:05.516798  337340 provision.go:87] duration metric: took 1.046901762s to configureAuth
	I1016 18:59:05.516822  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:05.517061  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:05.517252  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.546833  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:05.547150  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:05.547168  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:05.937754  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:05.937782  337340 machine.go:96] duration metric: took 5.176458229s to provisionDockerMachine
	I1016 18:59:05.937802  337340 start.go:293] postStartSetup for "ha-556988-m02" (driver="docker")
	I1016 18:59:05.937814  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:05.937890  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:05.937937  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.955324  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.057291  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:06.060623  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:06.060656  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:06.060668  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:06.060728  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:06.060812  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:06.060824  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:06.060930  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:06.068899  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:06.087392  337340 start.go:296] duration metric: took 149.572621ms for postStartSetup
	I1016 18:59:06.087476  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:06.087533  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.109477  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.222886  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:06.229852  337340 fix.go:56] duration metric: took 5.963072953s for fixHost
	I1016 18:59:06.229883  337340 start.go:83] releasing machines lock for "ha-556988-m02", held for 5.963130679s
	I1016 18:59:06.229963  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:06.266689  337340 out.go:179] * Found network options:
	I1016 18:59:06.273332  337340 out.go:179]   - NO_PROXY=192.168.49.2
	W1016 18:59:06.276561  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:06.276606  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:06.276683  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:06.276749  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.276754  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:06.276816  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.317825  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.323025  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.671873  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:06.677594  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:06.677732  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:06.690261  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:06.690335  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:06.690384  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:06.690471  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:06.714650  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:06.733867  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:06.733929  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:06.752522  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:06.775910  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:06.992043  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:07.227541  337340 docker.go:234] disabling docker service ...
	I1016 18:59:07.227607  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:07.250512  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:07.276078  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:07.484122  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:07.729089  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:07.767438  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:07.809637  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:07.809753  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.832720  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:07.832842  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.859881  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.889284  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.901694  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:07.922354  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.941649  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.951572  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.961513  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:07.970666  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:59:07.978742  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:08.323908  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:59:09.667321  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.343330778s)
	I1016 18:59:09.667346  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:59:09.667400  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:59:09.677469  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:59:09.677549  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:59:09.683697  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:59:09.731470  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:59:09.731621  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.782976  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.844144  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:59:09.847254  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 18:59:09.850158  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:59:09.881787  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:59:09.886123  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:09.903709  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 18:59:09.903953  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:09.904211  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:59:09.944289  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 18:59:09.944603  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.3
	I1016 18:59:09.944620  337340 certs.go:195] generating shared ca certs ...
	I1016 18:59:09.944638  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:09.944779  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:59:09.944832  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:59:09.944844  337340 certs.go:257] generating profile certs ...
	I1016 18:59:09.944939  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:59:09.945027  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.2ae973c7
	I1016 18:59:09.945079  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:59:09.945092  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:59:09.945106  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:59:09.945127  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:59:09.945166  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:59:09.945182  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:59:09.945202  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:59:09.945213  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:59:09.945233  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:59:09.945291  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:59:09.945327  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:59:09.945341  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:59:09.945370  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:59:09.945403  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:59:09.945429  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:59:09.945482  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:09.945516  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:09.945534  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:59:09.945549  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:59:09.945612  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:59:09.972941  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:59:10.097521  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 18:59:10.102513  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 18:59:10.114147  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 18:59:10.119117  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 18:59:10.130126  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 18:59:10.134419  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 18:59:10.144627  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 18:59:10.148520  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 18:59:10.157921  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 18:59:10.161674  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 18:59:10.171535  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 18:59:10.175229  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 18:59:10.184604  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:59:10.206415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:59:10.228102  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:59:10.258566  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:59:10.283952  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:59:10.306580  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:59:10.329415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:59:10.348969  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:59:10.368321  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:59:10.387180  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:59:10.408929  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:59:10.429114  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 18:59:10.444245  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 18:59:10.458197  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 18:59:10.472176  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 18:59:10.485882  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 18:59:10.499848  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 18:59:10.515126  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 18:59:10.528667  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:59:10.535446  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:59:10.544186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548237  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548342  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.591605  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:59:10.600300  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:59:10.608985  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612817  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612923  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.655658  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 18:59:10.664193  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:59:10.673263  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677209  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677288  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.718855  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:59:10.726829  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:59:10.730876  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:59:10.773328  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:59:10.815232  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:59:10.858016  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:59:10.899603  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:59:10.942507  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 18:59:10.988343  337340 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1016 18:59:10.988480  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:59:10.988535  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:59:10.988601  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:59:11.002298  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:11.002415  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 18:59:11.002494  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:59:11.011536  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:59:11.011651  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 18:59:11.021905  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 18:59:11.037889  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:59:11.051536  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:59:11.069953  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:59:11.074152  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:11.086164  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.252847  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.266706  337340 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:11.267048  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:11.273634  337340 out.go:179] * Verifying Kubernetes components...
	I1016 18:59:11.276480  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.421023  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.436654  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 18:59:11.436746  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 18:59:11.437099  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m02" to be "Ready" ...
	I1016 18:59:34.862749  337340 node_ready.go:49] node "ha-556988-m02" is "Ready"
	I1016 18:59:34.862783  337340 node_ready.go:38] duration metric: took 23.425601966s for node "ha-556988-m02" to be "Ready" ...
	I1016 18:59:34.862797  337340 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:59:34.862859  337340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:59:34.885329  337340 api_server.go:72] duration metric: took 23.618240686s to wait for apiserver process to appear ...
	I1016 18:59:34.885358  337340 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:59:34.885377  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:34.897604  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:34.897640  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.386323  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.400088  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.400123  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.885493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.987319  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.987359  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.385456  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.412352  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.412390  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.885906  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.906763  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.906805  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.386256  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.404132  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.404163  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.885488  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.894320  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.894358  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:38.385493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:38.394925  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 18:59:38.395973  337340 api_server.go:141] control plane version: v1.34.1
	I1016 18:59:38.396011  337340 api_server.go:131] duration metric: took 3.51063495s to wait for apiserver health ...
	I1016 18:59:38.396021  337340 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:59:38.401864  337340 system_pods.go:59] 26 kube-system pods found
	I1016 18:59:38.401911  337340 system_pods.go:61] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401923  337340 system_pods.go:61] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401929  337340 system_pods.go:61] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.401935  337340 system_pods.go:61] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.401940  337340 system_pods.go:61] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.401952  337340 system_pods.go:61] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.401957  337340 system_pods.go:61] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.401968  337340 system_pods.go:61] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.401972  337340 system_pods.go:61] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.401979  337340 system_pods.go:61] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.401988  337340 system_pods.go:61] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.401994  337340 system_pods.go:61] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.402001  337340 system_pods.go:61] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402018  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402024  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.402030  337340 system_pods.go:61] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.402037  337340 system_pods.go:61] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.402041  337340 system_pods.go:61] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.402049  337340 system_pods.go:61] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.402060  337340 system_pods.go:61] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402068  337340 system_pods.go:61] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402073  337340 system_pods.go:61] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.402077  337340 system_pods.go:61] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.402081  337340 system_pods.go:61] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.402085  337340 system_pods.go:61] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.402089  337340 system_pods.go:61] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.402095  337340 system_pods.go:74] duration metric: took 6.067311ms to wait for pod list to return data ...
	I1016 18:59:38.402109  337340 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:59:38.406892  337340 default_sa.go:45] found service account: "default"
	I1016 18:59:38.406919  337340 default_sa.go:55] duration metric: took 4.803341ms for default service account to be created ...
	I1016 18:59:38.406930  337340 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:59:38.413271  337340 system_pods.go:86] 26 kube-system pods found
	I1016 18:59:38.413316  337340 system_pods.go:89] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413326  337340 system_pods.go:89] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413332  337340 system_pods.go:89] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.413337  337340 system_pods.go:89] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.413343  337340 system_pods.go:89] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.413350  337340 system_pods.go:89] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.413355  337340 system_pods.go:89] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.413367  337340 system_pods.go:89] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.413379  337340 system_pods.go:89] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.413390  337340 system_pods.go:89] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.413396  337340 system_pods.go:89] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.413406  337340 system_pods.go:89] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.413413  337340 system_pods.go:89] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413425  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413430  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.413435  337340 system_pods.go:89] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.413440  337340 system_pods.go:89] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.413444  337340 system_pods.go:89] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.413456  337340 system_pods.go:89] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.413467  337340 system_pods.go:89] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413474  337340 system_pods.go:89] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413486  337340 system_pods.go:89] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.413491  337340 system_pods.go:89] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.413495  337340 system_pods.go:89] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.413498  337340 system_pods.go:89] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.413502  337340 system_pods.go:89] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.413515  337340 system_pods.go:126] duration metric: took 6.570484ms to wait for k8s-apps to be running ...
	I1016 18:59:38.413533  337340 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:59:38.413612  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:59:38.430123  337340 system_svc.go:56] duration metric: took 16.57935ms WaitForService to wait for kubelet
	I1016 18:59:38.430164  337340 kubeadm.go:586] duration metric: took 27.163079108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:59:38.430184  337340 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:59:38.453899  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453938  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453950  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453964  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453969  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453977  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453981  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453986  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453993  337340 node_conditions.go:105] duration metric: took 23.803362ms to run NodePressure ...
	I1016 18:59:38.454005  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:38.454041  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:38.457719  337340 out.go:203] 
	I1016 18:59:38.460987  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:38.461187  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.464790  337340 out.go:179] * Starting "ha-556988-m03" control-plane node in "ha-556988" cluster
	I1016 18:59:38.468557  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:38.471645  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:38.474579  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:38.474688  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:38.474647  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:38.475030  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:38.475073  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:38.475235  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.500130  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:38.500149  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:38.500163  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:38.500186  337340 start.go:360] acquireMachinesLock for ha-556988-m03: {Name:mk34d9a60e195460efb0e14fede3a8b24d8e28a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:38.500240  337340 start.go:364] duration metric: took 38.999µs to acquireMachinesLock for "ha-556988-m03"
	I1016 18:59:38.500259  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:38.500264  337340 fix.go:54] fixHost starting: m03
	I1016 18:59:38.500516  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.520771  337340 fix.go:112] recreateIfNeeded on ha-556988-m03: state=Stopped err=<nil>
	W1016 18:59:38.520796  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:38.523984  337340 out.go:252] * Restarting existing docker container for "ha-556988-m03" ...
	I1016 18:59:38.524069  337340 cli_runner.go:164] Run: docker start ha-556988-m03
	I1016 18:59:38.865706  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.891919  337340 kic.go:430] container "ha-556988-m03" state is running.
	I1016 18:59:38.895965  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:38.924344  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.924714  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:38.924805  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:38.953535  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:38.953854  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:38.954163  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:38.955105  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:59:42.156520  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.156559  337340 ubuntu.go:182] provisioning hostname "ha-556988-m03"
	I1016 18:59:42.156649  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.195862  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.196197  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.196217  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m03 && echo "ha-556988-m03" | sudo tee /etc/hostname
	I1016 18:59:42.415761  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.415927  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.448329  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.448631  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.448648  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:42.655633  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:42.655699  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:42.655755  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:42.655798  337340 provision.go:84] configureAuth start
	I1016 18:59:42.655888  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:42.682731  337340 provision.go:143] copyHostCerts
	I1016 18:59:42.682774  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682809  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:42.682816  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682894  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:42.683003  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683029  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:42.683034  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683063  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:42.683113  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683134  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:42.683138  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683162  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:42.683208  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m03 san=[127.0.0.1 192.168.49.4 ha-556988-m03 localhost minikube]
	I1016 18:59:42.986072  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:42.986191  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:42.986266  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.009339  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.190424  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:43.190488  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:43.234240  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:43.234303  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:43.271524  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:43.271634  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:59:43.309031  337340 provision.go:87] duration metric: took 653.205044ms to configureAuth
	I1016 18:59:43.309101  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:43.309396  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:43.309551  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.341419  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:43.341745  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:43.341761  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:43.818670  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:43.818698  337340 machine.go:96] duration metric: took 4.89396612s to provisionDockerMachine
	I1016 18:59:43.818717  337340 start.go:293] postStartSetup for "ha-556988-m03" (driver="docker")
	I1016 18:59:43.818729  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:43.818800  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:43.818847  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.843907  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.949206  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:43.952687  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:43.952714  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:43.952725  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:43.952777  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:43.952858  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:43.952870  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:43.952966  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:43.960926  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:43.978806  337340 start.go:296] duration metric: took 160.073239ms for postStartSetup
	I1016 18:59:43.978931  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:43.979022  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.996302  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.105727  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:44.111903  337340 fix.go:56] duration metric: took 5.611630616s for fixHost
	I1016 18:59:44.111982  337340 start.go:83] releasing machines lock for "ha-556988-m03", held for 5.611732928s
	I1016 18:59:44.112098  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:44.134145  337340 out.go:179] * Found network options:
	I1016 18:59:44.137067  337340 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1016 18:59:44.139998  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140032  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140058  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140075  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:44.140162  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:44.140230  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.140496  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:44.140567  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.164491  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.165069  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.454001  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:44.465509  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:44.465581  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:44.480708  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:44.480733  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:44.480764  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:44.480811  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:44.509331  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:44.557844  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:44.557910  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:44.588703  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:44.608697  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:44.891467  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:45.246520  337340 docker.go:234] disabling docker service ...
	I1016 18:59:45.246692  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:45.273127  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:45.348286  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:45.631385  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:45.856092  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:45.872650  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:45.898496  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:45.898570  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.916170  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:45.916240  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.931066  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.942127  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.952558  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:45.963182  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.973482  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.986310  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.996358  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:46.016551  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:59:46.027307  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:46.234905  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:01:16.580381  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345368285s)
	I1016 19:01:16.580410  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:01:16.580469  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:01:16.585512  337340 start.go:563] Will wait 60s for crictl version
	I1016 19:01:16.585597  337340 ssh_runner.go:195] Run: which crictl
	I1016 19:01:16.589679  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:01:16.622370  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:01:16.622451  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.658490  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.704130  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:01:16.707094  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 19:01:16.709928  337340 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1016 19:01:16.713018  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:01:16.729609  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 19:01:16.733845  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:16.745323  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 19:01:16.745573  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:16.745830  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 19:01:16.768218  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 19:01:16.768499  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.4
	I1016 19:01:16.768516  337340 certs.go:195] generating shared ca certs ...
	I1016 19:01:16.768531  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:01:16.768657  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:01:16.768700  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:01:16.768712  337340 certs.go:257] generating profile certs ...
	I1016 19:01:16.768792  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 19:01:16.768863  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.a8cc042e
	I1016 19:01:16.768908  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 19:01:16.768921  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 19:01:16.768935  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 19:01:16.768951  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 19:01:16.768967  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 19:01:16.768979  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 19:01:16.768993  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 19:01:16.769005  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 19:01:16.769021  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 19:01:16.769073  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:01:16.769107  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:01:16.769120  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:01:16.769171  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:01:16.769198  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:01:16.769219  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:01:16.769266  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:01:16.769303  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 19:01:16.769321  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:16.769333  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 19:01:16.769395  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 19:01:16.790995  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 19:01:16.889480  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 19:01:16.893451  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 19:01:16.901926  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 19:01:16.905634  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 19:01:16.914578  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 19:01:16.918356  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 19:01:16.926812  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 19:01:16.930535  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 19:01:16.940123  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 19:01:16.944094  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 19:01:16.953660  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 19:01:16.957601  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 19:01:16.966798  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:01:16.985414  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:01:17.016239  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:01:17.039046  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:01:17.060181  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 19:01:17.080570  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:01:17.105243  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:01:17.127158  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:01:17.146687  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:01:17.165827  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:01:17.185097  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:01:17.205538  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 19:01:17.220414  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 19:01:17.233996  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 19:01:17.248515  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 19:01:17.264946  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 19:01:17.279635  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 19:01:17.293984  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 19:01:17.308573  337340 ssh_runner.go:195] Run: openssl version
	I1016 19:01:17.315622  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:01:17.326067  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330066  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330132  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.373334  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:01:17.382328  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:01:17.393741  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398032  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398108  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.446048  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:01:17.454686  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:01:17.471186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475661  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475768  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.543984  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
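(Editor's note: the sequence above is the standard OpenSSL subject-hash layout for /etc/ssl/certs — each CA file gets a symlink named <subject_hash>.0 so TLS libraries can locate it. A small Go sketch of the same idea, shelling out to openssl exactly as the log does; paths are illustrative, and in the test this runs over SSH inside the node:)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash runs `openssl x509 -hash -noout -in cert` and creates
// <certsDir>/<hash>.0 pointing at the certificate, as in the log above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}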
	I1016 19:01:17.583902  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:01:17.596353  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:01:17.693798  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:01:17.818221  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:01:17.876853  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:01:17.929859  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:01:18.028781  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 19:01:18.102665  337340 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1016 19:01:18.102853  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:01:18.102905  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 19:01:18.102986  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 19:01:18.130313  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:01:18.130424  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 19:01:18.130517  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:01:18.145569  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:01:18.145719  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 19:01:18.158741  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 19:01:18.175520  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:01:18.201069  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 19:01:18.223378  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 19:01:18.230855  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:18.262619  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.515974  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.534144  337340 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:01:18.534496  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:18.537694  337340 out.go:179] * Verifying Kubernetes components...
	I1016 19:01:18.540519  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.853344  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.870280  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 19:01:18.870409  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 19:01:18.870686  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m03" to be "Ready" ...
	W1016 19:01:20.874310  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:22.875099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:24.875540  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:27.374249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:29.375013  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:31.874737  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:34.373989  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:36.375778  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:38.874593  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:40.874828  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:42.875042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:45.378712  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:47.875029  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:49.875081  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:52.374191  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:54.374870  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:56.874176  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:58.874680  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:00.875335  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:03.374728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:05.874729  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:07.874820  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:10.374640  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:12.374741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:14.375254  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:16.874287  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:19.375567  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:21.874303  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:24.374724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:26.874201  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:28.875139  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:30.875913  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:32.876533  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:35.374093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:37.374317  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:39.873972  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:41.874678  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:44.374313  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:46.374843  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:48.375268  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:50.874442  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:52.874670  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:54.876042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:57.374242  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:59.374764  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:01.375629  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:03.874090  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:05.874933  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:07.874988  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:10.375278  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:12.875217  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:15.374125  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:17.374601  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:19.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:21.874761  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:24.373999  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:26.374333  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:28.374800  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:30.375182  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:32.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:34.875038  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:37.374178  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:39.374897  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:41.376724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:43.875074  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:45.875991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:48.374682  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:50.374756  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:52.874361  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:54.874691  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:57.375643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:59.874852  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:02.374714  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:04.874203  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:07.375099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:09.874992  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:12.375032  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:14.874592  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:17.374337  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:19.375719  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:21.874855  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:23.875005  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:26.374357  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:28.874350  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:31.374814  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:33.375229  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:35.376366  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:37.875161  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:40.374398  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:42.375093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:44.375288  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:46.874677  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:49.374853  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:51.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:53.874728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:56.374314  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:58.374922  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:00.398713  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:02.874327  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:04.875407  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:07.374991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:09.375065  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:11.874375  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:13.875021  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:15.875906  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:18.374204  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:20.375019  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:22.874356  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:24.874622  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:26.874889  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:29.374262  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:31.375054  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:33.408848  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:35.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:37.874785  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:39.875878  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:42.374064  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:44.374403  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:46.874583  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:49.375025  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:51.875263  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:54.374635  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:56.374838  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:58.874718  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:01.374046  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:03.874734  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:06.374348  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:08.874846  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:10.875133  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:13.373809  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:15.374383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:17.374643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:19.375329  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:21.874529  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:23.874845  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:26.374245  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:28.874069  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:30.874264  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:32.874477  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:35.374326  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:37.874249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:39.874482  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:41.875383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:44.374077  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:46.374372  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:48.874600  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:50.874741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:53.375464  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:55.875061  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:58.374676  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:00.377657  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:02.384684  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:04.874707  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:06.875283  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:09.374694  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:11.874370  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:14.375095  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:16.874880  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	I1016 19:07:18.870877  337340 node_ready.go:38] duration metric: took 6m0.000146858s for node "ha-556988-m03" to be "Ready" ...
	I1016 19:07:18.873970  337340 out.go:203] 
	W1016 19:07:18.876680  337340 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1016 19:07:18.876697  337340 out.go:285] * 
	* 
	W1016 19:07:18.878873  337340 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:07:18.881589  337340 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-556988 node list --alsologtostderr -v 5" : exit status 80
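(Editor's note: the restart fails because the start command polls the rejoined node's Ready condition until a deadline and gives up after 6 minutes, which is what produces the long run of "will retry" lines and the GUEST_START exit above. A stripped-down sketch of that wait loop using client-go — a hypothetical helper for illustration; minikube's node_ready.go does more bookkeeping:)

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// context deadline expires — roughly the loop behind the "will retry" lines.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-556988-m03"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}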
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-556988
helpers_test.go:243: (dbg) docker inspect ha-556988:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000",
	        "Created": "2025-10-16T18:53:20.826320924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337466,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:58:51.979830748Z",
	            "FinishedAt": "2025-10-16T18:58:51.377562063Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/hosts",
	        "LogPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000-json.log",
	        "Name": "/ha-556988",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-556988:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-556988",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000",
	                "LowerDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-556988",
	                "Source": "/var/lib/docker/volumes/ha-556988/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-556988",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-556988",
	                "name.minikube.sigs.k8s.io": "ha-556988",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "065c5d4e8a096d5f9ffdf9b63e7c2cb496f2eb5bb12369ce1f2bda60d9a79e64",
	            "SandboxKey": "/var/run/docker/netns/065c5d4e8a09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-556988": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:e9:5a:29:59:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7adcf17f22baf4ae9b9dbf2b45e75904ea1540233e225aef4731989fd57a7fcc",
	                    "EndpointID": "6a0543cc77855a1155f456a458b934e2cd29f8314af96acb35727ae6ed5a96c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-556988",
	                        "ee539784e727"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
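(Editor's note: the "Ports" map in the inspect output above is where the harness resolves the host-side SSH port — 127.0.0.1:33178 earlier in the log. A minimal sketch of the same lookup, reusing the Go-template filter the log itself runs; it assumes a local docker CLI and a container named ha-556988:)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// sshHostPort asks docker for the host port mapped to 22/tcp, using the same
// --format template that appears in the log above.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-556988")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("ssh is published on 127.0.0.1:%s\n", port)
}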
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-556988 -n ha-556988
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 logs -n 25: (1.599287291s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt ha-556988-m02:/home/docker/cp-test_ha-556988-m03_ha-556988-m02.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m02 sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m02.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt ha-556988-m04:/home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp testdata/cp-test.txt ha-556988-m04:/home/docker/cp-test.txt                                                             │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002313520/001/cp-test_ha-556988-m04.txt │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988:/home/docker/cp-test_ha-556988-m04_ha-556988.txt                       │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988.txt                                                 │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m02:/home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m02 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m03:/home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ node    │ ha-556988 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ node    │ ha-556988 node start m02 --alsologtostderr -v 5                                                                                      │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:58 UTC │
	│ node    │ ha-556988 node list --alsologtostderr -v 5                                                                                           │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │                     │
	│ stop    │ ha-556988 stop --alsologtostderr -v 5                                                                                                │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │ 16 Oct 25 18:58 UTC │
	│ start   │ ha-556988 start --wait true --alsologtostderr -v 5                                                                                   │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │                     │
	│ node    │ ha-556988 node list --alsologtostderr -v 5                                                                                           │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 19:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:58:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:58:51.718625  337340 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:58:51.718820  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.718832  337340 out.go:374] Setting ErrFile to fd 2...
	I1016 18:58:51.718837  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.719085  337340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:58:51.719452  337340 out.go:368] Setting JSON to false
	I1016 18:58:51.720287  337340 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6061,"bootTime":1760635071,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:58:51.720360  337340 start.go:141] virtualization:  
	I1016 18:58:51.723622  337340 out.go:179] * [ha-556988] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:58:51.727453  337340 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:58:51.727561  337340 notify.go:220] Checking for updates...
	I1016 18:58:51.733207  337340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:58:51.736137  337340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:58:51.738974  337340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:58:51.741951  337340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:58:51.744907  337340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:58:51.748268  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:51.748399  337340 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:58:51.772958  337340 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:58:51.773087  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.833709  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.824777239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.833825  337340 docker.go:318] overlay module found
	I1016 18:58:51.836939  337340 out.go:179] * Using the docker driver based on existing profile
	I1016 18:58:51.839798  337340 start.go:305] selected driver: docker
	I1016 18:58:51.839818  337340 start.go:925] validating driver "docker" against &{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.839961  337340 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:58:51.840070  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.894329  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.884487993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.894716  337340 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:58:51.894754  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:51.894821  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:51.894871  337340 start.go:349] cluster config:
	{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.898184  337340 out.go:179] * Starting "ha-556988" primary control-plane node in "ha-556988" cluster
	I1016 18:58:51.901075  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:58:51.904106  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:58:51.906904  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:51.906960  337340 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:58:51.906971  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:58:51.906995  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:58:51.907065  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:58:51.907074  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:58:51.907213  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:51.927032  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:58:51.927054  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:58:51.927071  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:58:51.927094  337340 start.go:360] acquireMachinesLock for ha-556988: {Name:mk71c3a6201989099f6bf114603feb8455c41f5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:58:51.927153  337340 start.go:364] duration metric: took 41.945µs to acquireMachinesLock for "ha-556988"
	I1016 18:58:51.927187  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:58:51.927198  337340 fix.go:54] fixHost starting: 
	I1016 18:58:51.927452  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:51.944496  337340 fix.go:112] recreateIfNeeded on ha-556988: state=Stopped err=<nil>
	W1016 18:58:51.944531  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:58:51.947809  337340 out.go:252] * Restarting existing docker container for "ha-556988" ...
	I1016 18:58:51.947886  337340 cli_runner.go:164] Run: docker start ha-556988
	I1016 18:58:52.211064  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:52.238130  337340 kic.go:430] container "ha-556988" state is running.
	I1016 18:58:52.238496  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:52.265254  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:52.265525  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:58:52.265595  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:52.289105  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:52.289561  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:52.289576  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:58:52.290191  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:58:55.440597  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
	I1016 18:58:55.440631  337340 ubuntu.go:182] provisioning hostname "ha-556988"
	I1016 18:58:55.440701  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.458200  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.458510  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.458528  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988 && echo "ha-556988" | sudo tee /etc/hostname
	I1016 18:58:55.615084  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
	I1016 18:58:55.615165  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.633608  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.633925  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.633950  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:58:55.781429  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:58:55.781454  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:58:55.781481  337340 ubuntu.go:190] setting up certificates
	I1016 18:58:55.781490  337340 provision.go:84] configureAuth start
	I1016 18:58:55.781555  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:55.798617  337340 provision.go:143] copyHostCerts
	I1016 18:58:55.798664  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798709  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:58:55.798730  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798812  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:58:55.798915  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798938  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:58:55.798949  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798989  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:58:55.799046  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799068  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:58:55.799078  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799112  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:58:55.799198  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988 san=[127.0.0.1 192.168.49.2 ha-556988 localhost minikube]
	I1016 18:58:56.377628  337340 provision.go:177] copyRemoteCerts
	I1016 18:58:56.377703  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:58:56.377743  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.397097  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:56.500593  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:58:56.500663  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:58:56.518370  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:58:56.518433  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:58:56.536547  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:58:56.536628  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1016 18:58:56.555074  337340 provision.go:87] duration metric: took 773.569729ms to configureAuth
	I1016 18:58:56.555099  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:58:56.555326  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:56.555445  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.572643  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:56.572965  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:56.572986  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:58:56.890339  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:58:56.890428  337340 machine.go:96] duration metric: took 4.624892872s to provisionDockerMachine
	I1016 18:58:56.890454  337340 start.go:293] postStartSetup for "ha-556988" (driver="docker")
	I1016 18:58:56.890480  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:58:56.890607  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:58:56.890683  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.913382  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.017075  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:58:57.021857  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:58:57.021887  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:58:57.021899  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:58:57.021965  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:58:57.022045  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:58:57.022052  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:58:57.022160  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:58:57.030852  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:57.048968  337340 start.go:296] duration metric: took 158.482858ms for postStartSetup
	I1016 18:58:57.049157  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:58:57.049222  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.066845  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.166118  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:58:57.170752  337340 fix.go:56] duration metric: took 5.243547354s for fixHost
	I1016 18:58:57.170779  337340 start.go:83] releasing machines lock for "ha-556988", held for 5.243610027s
	I1016 18:58:57.170862  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:57.187672  337340 ssh_runner.go:195] Run: cat /version.json
	I1016 18:58:57.187699  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:58:57.187723  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.187757  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.206208  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.213346  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.391366  337340 ssh_runner.go:195] Run: systemctl --version
	I1016 18:58:57.397910  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:58:57.434230  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:58:57.439686  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:58:57.439757  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:58:57.447828  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:58:57.447851  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:58:57.447886  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:58:57.447952  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:58:57.463944  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:58:57.477406  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:58:57.477468  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:58:57.493693  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:58:57.507255  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:58:57.614114  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:58:57.729976  337340 docker.go:234] disabling docker service ...
	I1016 18:58:57.730050  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:58:57.745940  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:58:57.758869  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:58:57.875693  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:58:57.984271  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:58:57.997324  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:58:58.012287  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:58:58.012387  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.023645  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:58:58.023740  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.036244  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.046489  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.055569  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:58:58.065264  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.075123  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.084654  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.094603  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:58:58.102554  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:58:58.110013  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.218071  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:58:58.347916  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:58:58.348026  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:58:58.351852  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:58:58.351953  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:58:58.355554  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:58:58.382893  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:58:58.383032  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.410837  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.446345  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:58:58.449238  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:58:58.465498  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:58:58.469406  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:58:58.479415  337340 kubeadm.go:883] updating cluster {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:58:58.479566  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:58.479620  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.516159  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.516181  337340 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:58:58.516239  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.543999  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.544030  337340 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:58:58.544040  337340 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 18:58:58.544140  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:58:58.544225  337340 ssh_runner.go:195] Run: crio config
	I1016 18:58:58.618937  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:58.618957  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:58.618981  337340 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:58:58.619008  337340 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-556988 NodeName:ha-556988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:58:58.619133  337340 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-556988"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:58:58.619160  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:58:58.619222  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:58:58.631579  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:58:58.631697  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 18:58:58.631769  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:58:58.640083  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:58:58.640188  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1016 18:58:58.648089  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1016 18:58:58.661375  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:58:58.674583  337340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1016 18:58:58.687345  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:58:58.700772  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:58:58.704503  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:58:58.714276  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.833486  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:58:58.851263  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.2
	I1016 18:58:58.851288  337340 certs.go:195] generating shared ca certs ...
	I1016 18:58:58.851306  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:58.851471  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:58:58.851524  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:58:58.851537  337340 certs.go:257] generating profile certs ...
	I1016 18:58:58.851633  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:58:58.851666  337340 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797
	I1016 18:58:58.851690  337340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1016 18:58:59.152876  337340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 ...
	I1016 18:58:59.152960  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797: {Name:mk3d22e55d5c37c04716dc4d1ee3cbc4538fbdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153223  337340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 ...
	I1016 18:58:59.153265  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797: {Name:mkda3eb1676258b3c7a46448934b59023d353a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153432  337340 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt
	I1016 18:58:59.153636  337340 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key
	I1016 18:58:59.153853  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:58:59.153891  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:58:59.153923  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:58:59.153965  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:58:59.153998  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:58:59.154028  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:58:59.154076  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:58:59.154112  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:58:59.154143  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:58:59.154239  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:58:59.154300  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:58:59.154325  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:58:59.154381  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:58:59.154435  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:58:59.154491  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:58:59.154609  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:59.154690  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.154737  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.154771  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.155500  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:58:59.174654  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:58:59.194053  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:58:59.220036  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:58:59.241089  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:58:59.259308  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:58:59.276555  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:58:59.293855  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:58:59.311467  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:58:59.329708  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:58:59.347304  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:58:59.364602  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:58:59.377635  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:58:59.384255  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:58:59.393733  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397737  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397824  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.438696  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:58:59.446893  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:58:59.455572  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459600  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459668  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.500823  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:58:59.509003  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:58:59.520724  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528394  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528467  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.578056  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 18:58:59.586838  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:58:59.594144  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:58:59.638647  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:58:59.694080  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:58:59.765575  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:58:59.865472  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:58:59.931581  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
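The six openssl probes above use -checkend 86400, which asks whether a certificate will still be valid 86,400 seconds (24 hours) from now: exit status 0 means it will not expire within that window, non-zero means it will. A minimal stand-alone sketch of the same check against one of the certs copied earlier in this log (the echo wrapper is illustrative, not part of the recorded run):

	# returns 0 while the cert is good for at least another day
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "apiserver.crt: valid for >24h" \
	  || echo "apiserver.crt: expires within 24h"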
	I1016 18:58:59.986682  337340 kubeadm.go:400] StartCluster: {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:59.986889  337340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:58:59.986987  337340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:59:00.020883  337340 cri.go:89] found id: "a6a97464c4b58734820a4c747fbaa58980bfcb3cdc5b94d0a49804bd9ecaf2d2"
	I1016 18:59:00.020964  337340 cri.go:89] found id: "37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7"
	I1016 18:59:00.020984  337340 cri.go:89] found id: "13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf"
	I1016 18:59:00.021007  337340 cri.go:89] found id: "ccd1663977e230bbda3cae69e035a19bb725c3f88efd4340e2acdb82e35b17b4"
	I1016 18:59:00.021041  337340 cri.go:89] found id: "0947527fb7c6600575f80d864636e177c1330efa7ab3caff116116cd0d07fe91"
	I1016 18:59:00.021071  337340 cri.go:89] found id: ""
	I1016 18:59:00.021222  337340 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:59:00.048970  337340 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:59:00Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:59:00.049191  337340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:59:00.064913  337340 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:59:00.065020  337340 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:59:00.065128  337340 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:59:00.081513  337340 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:00.082142  337340 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-556988" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.082376  337340 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "ha-556988" cluster setting kubeconfig missing "ha-556988" context setting]
	I1016 18:59:00.082852  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.083778  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:59:00.084642  337340 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1016 18:59:00.084775  337340 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:59:00.084800  337340 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:59:00.084835  337340 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:59:00.084861  337340 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:59:00.084885  337340 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:59:00.085481  337340 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:59:00.133777  337340 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1016 18:59:00.133865  337340 kubeadm.go:601] duration metric: took 68.819342ms to restartPrimaryControlPlane
	I1016 18:59:00.133892  337340 kubeadm.go:402] duration metric: took 147.219085ms to StartCluster
	I1016 18:59:00.133962  337340 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.134087  337340 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.134991  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.135381  337340 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:00.135451  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:00.135503  337340 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:59:00.136478  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.165207  337340 out.go:179] * Enabled addons: 
	I1016 18:59:00.168421  337340 addons.go:514] duration metric: took 32.907014ms for enable addons: enabled=[]
	I1016 18:59:00.168517  337340 start.go:246] waiting for cluster config update ...
	I1016 18:59:00.168542  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:00.191362  337340 out.go:203] 
	I1016 18:59:00.209821  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.209961  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.213495  337340 out.go:179] * Starting "ha-556988-m02" control-plane node in "ha-556988" cluster
	I1016 18:59:00.216452  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:00.223747  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:00.226672  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:00.226714  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:00.226842  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:00.226852  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:00.227106  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.227394  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:00.266622  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:00.266645  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:00.266659  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:00.266685  337340 start.go:360] acquireMachinesLock for ha-556988-m02: {Name:mkb742ea24d411e97f6bd75961598d91ba358bd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:00.266743  337340 start.go:364] duration metric: took 41.445µs to acquireMachinesLock for "ha-556988-m02"
	I1016 18:59:00.266766  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:00.266772  337340 fix.go:54] fixHost starting: m02
	I1016 18:59:00.267061  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.297319  337340 fix.go:112] recreateIfNeeded on ha-556988-m02: state=Stopped err=<nil>
	W1016 18:59:00.297360  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:00.300819  337340 out.go:252] * Restarting existing docker container for "ha-556988-m02" ...
	I1016 18:59:00.300940  337340 cli_runner.go:164] Run: docker start ha-556988-m02
	I1016 18:59:00.708144  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.733543  337340 kic.go:430] container "ha-556988-m02" state is running.
	I1016 18:59:00.733902  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:00.760804  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.761309  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:00.761403  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:00.808146  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:00.808685  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:00.808701  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:00.809303  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40522->127.0.0.1:33183: read: connection reset by peer
	I1016 18:59:04.034070  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.034139  337340 ubuntu.go:182] provisioning hostname "ha-556988-m02"
	I1016 18:59:04.034243  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.063655  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.063975  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.063993  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m02 && echo "ha-556988-m02" | sudo tee /etc/hostname
	I1016 18:59:04.267030  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.267113  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.300780  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.301103  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.301127  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:04.469711  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:04.469796  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:04.469828  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:04.469864  337340 provision.go:84] configureAuth start
	I1016 18:59:04.469974  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:04.508993  337340 provision.go:143] copyHostCerts
	I1016 18:59:04.509035  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509067  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:04.509074  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509305  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:04.509422  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509441  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:04.509446  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509496  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:04.509545  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509562  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:04.509566  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509591  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:04.509649  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m02 san=[127.0.0.1 192.168.49.3 ha-556988-m02 localhost minikube]
	I1016 18:59:05.303068  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:05.303142  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:05.303195  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.322174  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:05.428054  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:05.428132  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:59:05.461825  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:05.461888  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:05.487317  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:05.487378  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:05.516798  337340 provision.go:87] duration metric: took 1.046901762s to configureAuth
	I1016 18:59:05.516822  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:05.517061  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:05.517252  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.546833  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:05.547150  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:05.547168  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:05.937754  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:05.937782  337340 machine.go:96] duration metric: took 5.176458229s to provisionDockerMachine
	I1016 18:59:05.937802  337340 start.go:293] postStartSetup for "ha-556988-m02" (driver="docker")
	I1016 18:59:05.937814  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:05.937890  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:05.937937  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.955324  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.057291  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:06.060623  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:06.060656  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:06.060668  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:06.060728  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:06.060812  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:06.060824  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:06.060930  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:06.068899  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:06.087392  337340 start.go:296] duration metric: took 149.572621ms for postStartSetup
	I1016 18:59:06.087476  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:06.087533  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.109477  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.222886  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:06.229852  337340 fix.go:56] duration metric: took 5.963072953s for fixHost
	I1016 18:59:06.229883  337340 start.go:83] releasing machines lock for "ha-556988-m02", held for 5.963130679s
	I1016 18:59:06.229963  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:06.266689  337340 out.go:179] * Found network options:
	I1016 18:59:06.273332  337340 out.go:179]   - NO_PROXY=192.168.49.2
	W1016 18:59:06.276561  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:06.276606  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:06.276683  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:06.276749  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.276754  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:06.276816  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.317825  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.323025  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.671873  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:06.677594  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:06.677732  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:06.690261  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:06.690335  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:06.690384  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:06.690471  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:06.714650  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:06.733867  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:06.733929  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:06.752522  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:06.775910  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:06.992043  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:07.227541  337340 docker.go:234] disabling docker service ...
	I1016 18:59:07.227607  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:07.250512  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:07.276078  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:07.484122  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:07.729089  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:07.767438  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:07.809637  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:07.809753  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.832720  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:07.832842  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.859881  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.889284  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.901694  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:07.922354  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.941649  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.951572  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.961513  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:07.970666  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:59:07.978742  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:08.323908  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:59:09.667321  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.343330778s)
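The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to "cgroupfs" with conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" is injected into default_sysctls. A quick way to confirm the resulting drop-in on the node (a sketch, not a command issued by this run):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])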
	I1016 18:59:09.667346  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:59:09.667400  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:59:09.677469  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:59:09.677549  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:59:09.683697  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:59:09.731470  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:59:09.731621  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.782976  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.844144  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:59:09.847254  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 18:59:09.850158  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:59:09.881787  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:59:09.886123  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:09.903709  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 18:59:09.903953  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:09.904211  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:59:09.944289  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 18:59:09.944603  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.3
	I1016 18:59:09.944620  337340 certs.go:195] generating shared ca certs ...
	I1016 18:59:09.944638  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:09.944779  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:59:09.944832  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:59:09.944844  337340 certs.go:257] generating profile certs ...
	I1016 18:59:09.944939  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:59:09.945027  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.2ae973c7
	I1016 18:59:09.945079  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:59:09.945092  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:59:09.945106  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:59:09.945127  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:59:09.945166  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:59:09.945182  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:59:09.945202  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:59:09.945213  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:59:09.945233  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:59:09.945291  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:59:09.945327  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:59:09.945341  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:59:09.945370  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:59:09.945403  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:59:09.945429  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:59:09.945482  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:09.945516  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:09.945534  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:59:09.945549  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:59:09.945612  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:59:09.972941  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:59:10.097521  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 18:59:10.102513  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 18:59:10.114147  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 18:59:10.119117  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 18:59:10.130126  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 18:59:10.134419  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 18:59:10.144627  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 18:59:10.148520  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 18:59:10.157921  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 18:59:10.161674  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 18:59:10.171535  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 18:59:10.175229  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 18:59:10.184604  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:59:10.206415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:59:10.228102  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:59:10.258566  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:59:10.283952  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:59:10.306580  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:59:10.329415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:59:10.348969  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:59:10.368321  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:59:10.387180  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:59:10.408929  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:59:10.429114  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 18:59:10.444245  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 18:59:10.458197  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 18:59:10.472176  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 18:59:10.485882  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 18:59:10.499848  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 18:59:10.515126  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 18:59:10.528667  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:59:10.535446  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:59:10.544186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548237  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548342  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.591605  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:59:10.600300  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:59:10.608985  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612817  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612923  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.655658  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 18:59:10.664193  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:59:10.673263  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677209  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677288  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.718855  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:59:10.726829  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:59:10.730876  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:59:10.773328  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:59:10.815232  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:59:10.858016  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:59:10.899603  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:59:10.942507  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 18:59:10.988343  337340 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1016 18:59:10.988480  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:59:10.988535  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:59:10.988601  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:59:11.002298  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:11.002415  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 18:59:11.002494  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:59:11.011536  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:59:11.011651  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 18:59:11.021905  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 18:59:11.037889  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:59:11.051536  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:59:11.069953  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:59:11.074152  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:11.086164  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.252847  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.266706  337340 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:11.267048  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:11.273634  337340 out.go:179] * Verifying Kubernetes components...
	I1016 18:59:11.276480  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.421023  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.436654  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 18:59:11.436746  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 18:59:11.437099  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m02" to be "Ready" ...
	I1016 18:59:34.862749  337340 node_ready.go:49] node "ha-556988-m02" is "Ready"
	I1016 18:59:34.862783  337340 node_ready.go:38] duration metric: took 23.425601966s for node "ha-556988-m02" to be "Ready" ...
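The readiness wait above polls the node object until its Ready condition reports True; an equivalent manual check, assuming kubectl is pointed at the kubeconfig this run just repaired:

	kubectl --kubeconfig /home/jenkins/minikube-integration/21738-288457/kubeconfig \
	  get node ha-556988-m02 -o wide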
	I1016 18:59:34.862797  337340 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:59:34.862859  337340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:59:34.885329  337340 api_server.go:72] duration metric: took 23.618240686s to wait for apiserver process to appear ...
	I1016 18:59:34.885358  337340 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:59:34.885377  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:34.897604  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:34.897640  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.386323  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.400088  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.400123  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.885493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.987319  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.987359  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.385456  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.412352  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.412390  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.885906  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.906763  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.906805  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.386256  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.404132  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.404163  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.885488  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.894320  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.894358  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:38.385493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:38.394925  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 18:59:38.395973  337340 api_server.go:141] control plane version: v1.34.1
	I1016 18:59:38.396011  337340 api_server.go:131] duration metric: took 3.51063495s to wait for apiserver health ...
	I1016 18:59:38.396021  337340 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:59:38.401864  337340 system_pods.go:59] 26 kube-system pods found
	I1016 18:59:38.401911  337340 system_pods.go:61] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401923  337340 system_pods.go:61] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401929  337340 system_pods.go:61] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.401935  337340 system_pods.go:61] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.401940  337340 system_pods.go:61] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.401952  337340 system_pods.go:61] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.401957  337340 system_pods.go:61] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.401968  337340 system_pods.go:61] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.401972  337340 system_pods.go:61] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.401979  337340 system_pods.go:61] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.401988  337340 system_pods.go:61] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.401994  337340 system_pods.go:61] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.402001  337340 system_pods.go:61] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402018  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402024  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.402030  337340 system_pods.go:61] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.402037  337340 system_pods.go:61] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.402041  337340 system_pods.go:61] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.402049  337340 system_pods.go:61] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.402060  337340 system_pods.go:61] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402068  337340 system_pods.go:61] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402073  337340 system_pods.go:61] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.402077  337340 system_pods.go:61] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.402081  337340 system_pods.go:61] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.402085  337340 system_pods.go:61] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.402089  337340 system_pods.go:61] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.402095  337340 system_pods.go:74] duration metric: took 6.067311ms to wait for pod list to return data ...
	I1016 18:59:38.402109  337340 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:59:38.406892  337340 default_sa.go:45] found service account: "default"
	I1016 18:59:38.406919  337340 default_sa.go:55] duration metric: took 4.803341ms for default service account to be created ...
	I1016 18:59:38.406930  337340 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:59:38.413271  337340 system_pods.go:86] 26 kube-system pods found
	I1016 18:59:38.413316  337340 system_pods.go:89] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413326  337340 system_pods.go:89] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413332  337340 system_pods.go:89] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.413337  337340 system_pods.go:89] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.413343  337340 system_pods.go:89] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.413350  337340 system_pods.go:89] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.413355  337340 system_pods.go:89] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.413367  337340 system_pods.go:89] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.413379  337340 system_pods.go:89] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.413390  337340 system_pods.go:89] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.413396  337340 system_pods.go:89] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.413406  337340 system_pods.go:89] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.413413  337340 system_pods.go:89] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413425  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413430  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.413435  337340 system_pods.go:89] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.413440  337340 system_pods.go:89] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.413444  337340 system_pods.go:89] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.413456  337340 system_pods.go:89] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.413467  337340 system_pods.go:89] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413474  337340 system_pods.go:89] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413486  337340 system_pods.go:89] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.413491  337340 system_pods.go:89] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.413495  337340 system_pods.go:89] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.413498  337340 system_pods.go:89] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.413502  337340 system_pods.go:89] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.413515  337340 system_pods.go:126] duration metric: took 6.570484ms to wait for k8s-apps to be running ...
	I1016 18:59:38.413533  337340 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:59:38.413612  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:59:38.430123  337340 system_svc.go:56] duration metric: took 16.57935ms WaitForService to wait for kubelet
	I1016 18:59:38.430164  337340 kubeadm.go:586] duration metric: took 27.163079108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:59:38.430184  337340 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:59:38.453899  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453938  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453950  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453964  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453969  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453977  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453981  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453986  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453993  337340 node_conditions.go:105] duration metric: took 23.803362ms to run NodePressure ...
	I1016 18:59:38.454005  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:38.454041  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:38.457719  337340 out.go:203] 
	I1016 18:59:38.460987  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:38.461187  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.464790  337340 out.go:179] * Starting "ha-556988-m03" control-plane node in "ha-556988" cluster
	I1016 18:59:38.468557  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:38.471645  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:38.474579  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:38.474688  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:38.474647  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:38.475030  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:38.475073  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:38.475235  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.500130  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:38.500149  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:38.500163  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:38.500186  337340 start.go:360] acquireMachinesLock for ha-556988-m03: {Name:mk34d9a60e195460efb0e14fede3a8b24d8e28a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:38.500240  337340 start.go:364] duration metric: took 38.999µs to acquireMachinesLock for "ha-556988-m03"
	I1016 18:59:38.500259  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:38.500264  337340 fix.go:54] fixHost starting: m03
	I1016 18:59:38.500516  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.520771  337340 fix.go:112] recreateIfNeeded on ha-556988-m03: state=Stopped err=<nil>
	W1016 18:59:38.520796  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:38.523984  337340 out.go:252] * Restarting existing docker container for "ha-556988-m03" ...
	I1016 18:59:38.524069  337340 cli_runner.go:164] Run: docker start ha-556988-m03
	I1016 18:59:38.865706  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.891919  337340 kic.go:430] container "ha-556988-m03" state is running.
	I1016 18:59:38.895965  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:38.924344  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.924714  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:38.924805  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:38.953535  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:38.953854  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:38.954163  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:38.955105  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:59:42.156520  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.156559  337340 ubuntu.go:182] provisioning hostname "ha-556988-m03"
	I1016 18:59:42.156649  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.195862  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.196197  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.196217  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m03 && echo "ha-556988-m03" | sudo tee /etc/hostname
	I1016 18:59:42.415761  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.415927  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.448329  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.448631  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.448648  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:42.655633  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:42.655699  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:42.655755  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:42.655798  337340 provision.go:84] configureAuth start
	I1016 18:59:42.655888  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:42.682731  337340 provision.go:143] copyHostCerts
	I1016 18:59:42.682774  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682809  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:42.682816  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682894  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:42.683003  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683029  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:42.683034  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683063  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:42.683113  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683134  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:42.683138  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683162  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:42.683208  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m03 san=[127.0.0.1 192.168.49.4 ha-556988-m03 localhost minikube]
	I1016 18:59:42.986072  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:42.986191  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:42.986266  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.009339  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.190424  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:43.190488  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:43.234240  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:43.234303  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:43.271524  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:43.271634  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:59:43.309031  337340 provision.go:87] duration metric: took 653.205044ms to configureAuth
	I1016 18:59:43.309101  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:43.309396  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:43.309551  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.341419  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:43.341745  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:43.341761  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:43.818670  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:43.818698  337340 machine.go:96] duration metric: took 4.89396612s to provisionDockerMachine
	I1016 18:59:43.818717  337340 start.go:293] postStartSetup for "ha-556988-m03" (driver="docker")
	I1016 18:59:43.818729  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:43.818800  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:43.818847  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.843907  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.949206  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:43.952687  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:43.952714  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:43.952725  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:43.952777  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:43.952858  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:43.952870  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:43.952966  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:43.960926  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:43.978806  337340 start.go:296] duration metric: took 160.073239ms for postStartSetup
	I1016 18:59:43.978931  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:43.979022  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.996302  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.105727  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:44.111903  337340 fix.go:56] duration metric: took 5.611630616s for fixHost
	I1016 18:59:44.111982  337340 start.go:83] releasing machines lock for "ha-556988-m03", held for 5.611732928s
	I1016 18:59:44.112098  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:44.134145  337340 out.go:179] * Found network options:
	I1016 18:59:44.137067  337340 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1016 18:59:44.139998  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140032  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140058  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140075  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:44.140162  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:44.140230  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.140496  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:44.140567  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.164491  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.165069  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.454001  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:44.465509  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:44.465581  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:44.480708  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:44.480733  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:44.480764  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:44.480811  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:44.509331  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:44.557844  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:44.557910  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:44.588703  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:44.608697  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:44.891467  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:45.246520  337340 docker.go:234] disabling docker service ...
	I1016 18:59:45.246692  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:45.273127  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:45.348286  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:45.631385  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:45.856092  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:45.872650  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:45.898496  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:45.898570  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.916170  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:45.916240  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.931066  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.942127  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.952558  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:45.963182  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.973482  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.986310  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.996358  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:46.016551  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:59:46.027307  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:46.234905  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:01:16.580381  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345368285s)
	I1016 19:01:16.580410  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:01:16.580469  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:01:16.585512  337340 start.go:563] Will wait 60s for crictl version
	I1016 19:01:16.585597  337340 ssh_runner.go:195] Run: which crictl
	I1016 19:01:16.589679  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:01:16.622370  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:01:16.622451  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.658490  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.704130  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:01:16.707094  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 19:01:16.709928  337340 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1016 19:01:16.713018  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:01:16.729609  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 19:01:16.733845  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:16.745323  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 19:01:16.745573  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:16.745830  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 19:01:16.768218  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 19:01:16.768499  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.4
	I1016 19:01:16.768516  337340 certs.go:195] generating shared ca certs ...
	I1016 19:01:16.768531  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:01:16.768657  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:01:16.768700  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:01:16.768712  337340 certs.go:257] generating profile certs ...
	I1016 19:01:16.768792  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 19:01:16.768863  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.a8cc042e
	I1016 19:01:16.768908  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 19:01:16.768921  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 19:01:16.768935  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 19:01:16.768951  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 19:01:16.768967  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 19:01:16.768979  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 19:01:16.768993  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 19:01:16.769005  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 19:01:16.769021  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 19:01:16.769073  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:01:16.769107  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:01:16.769120  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:01:16.769171  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:01:16.769198  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:01:16.769219  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:01:16.769266  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:01:16.769303  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 19:01:16.769321  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:16.769333  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 19:01:16.769395  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 19:01:16.790995  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 19:01:16.889480  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 19:01:16.893451  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 19:01:16.901926  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 19:01:16.905634  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 19:01:16.914578  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 19:01:16.918356  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 19:01:16.926812  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 19:01:16.930535  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 19:01:16.940123  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 19:01:16.944094  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 19:01:16.953660  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 19:01:16.957601  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 19:01:16.966798  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:01:16.985414  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:01:17.016239  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:01:17.039046  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:01:17.060181  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 19:01:17.080570  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:01:17.105243  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:01:17.127158  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:01:17.146687  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:01:17.165827  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:01:17.185097  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:01:17.205538  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 19:01:17.220414  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 19:01:17.233996  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 19:01:17.248515  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 19:01:17.264946  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 19:01:17.279635  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 19:01:17.293984  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 19:01:17.308573  337340 ssh_runner.go:195] Run: openssl version
	I1016 19:01:17.315622  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:01:17.326067  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330066  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330132  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.373334  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:01:17.382328  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:01:17.393741  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398032  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398108  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.446048  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:01:17.454686  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:01:17.471186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475661  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475768  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.543984  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:01:17.583902  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:01:17.596353  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:01:17.693798  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:01:17.818221  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:01:17.876853  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:01:17.929859  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:01:18.028781  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 19:01:18.102665  337340 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1016 19:01:18.102853  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:01:18.102905  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 19:01:18.102986  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 19:01:18.130313  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:01:18.130424  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 19:01:18.130517  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:01:18.145569  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:01:18.145719  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 19:01:18.158741  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 19:01:18.175520  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:01:18.201069  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 19:01:18.223378  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 19:01:18.230855  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:18.262619  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.515974  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.534144  337340 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:01:18.534496  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:18.537694  337340 out.go:179] * Verifying Kubernetes components...
	I1016 19:01:18.540519  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.853344  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.870280  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 19:01:18.870409  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 19:01:18.870686  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m03" to be "Ready" ...
	W1016 19:01:20.874310  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:22.875099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:24.875540  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:27.374249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:29.375013  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:31.874737  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:34.373989  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:36.375778  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:38.874593  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:40.874828  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:42.875042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:45.378712  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:47.875029  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:49.875081  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:52.374191  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:54.374870  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:56.874176  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:58.874680  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:00.875335  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:03.374728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:05.874729  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:07.874820  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:10.374640  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:12.374741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:14.375254  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:16.874287  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:19.375567  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:21.874303  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:24.374724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:26.874201  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:28.875139  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:30.875913  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:32.876533  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:35.374093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:37.374317  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:39.873972  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:41.874678  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:44.374313  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:46.374843  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:48.375268  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:50.874442  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:52.874670  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:54.876042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:57.374242  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:59.374764  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:01.375629  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:03.874090  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:05.874933  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:07.874988  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:10.375278  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:12.875217  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:15.374125  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:17.374601  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:19.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:21.874761  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:24.373999  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:26.374333  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:28.374800  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:30.375182  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:32.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:34.875038  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:37.374178  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:39.374897  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:41.376724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:43.875074  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:45.875991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:48.374682  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:50.374756  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:52.874361  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:54.874691  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:57.375643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:59.874852  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:02.374714  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:04.874203  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:07.375099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:09.874992  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:12.375032  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:14.874592  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:17.374337  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:19.375719  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:21.874855  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:23.875005  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:26.374357  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:28.874350  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:31.374814  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:33.375229  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:35.376366  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:37.875161  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:40.374398  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:42.375093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:44.375288  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:46.874677  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:49.374853  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:51.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:53.874728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:56.374314  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:58.374922  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:00.398713  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:02.874327  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:04.875407  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:07.374991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:09.375065  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:11.874375  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:13.875021  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:15.875906  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:18.374204  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:20.375019  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:22.874356  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:24.874622  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:26.874889  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:29.374262  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:31.375054  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:33.408848  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:35.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:37.874785  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:39.875878  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:42.374064  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:44.374403  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:46.874583  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:49.375025  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:51.875263  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:54.374635  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:56.374838  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:58.874718  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:01.374046  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:03.874734  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:06.374348  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:08.874846  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:10.875133  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:13.373809  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:15.374383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:17.374643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:19.375329  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:21.874529  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:23.874845  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:26.374245  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:28.874069  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:30.874264  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:32.874477  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:35.374326  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:37.874249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:39.874482  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:41.875383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:44.374077  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:46.374372  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:48.874600  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:50.874741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:53.375464  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:55.875061  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:58.374676  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:00.377657  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:02.384684  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:04.874707  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:06.875283  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:09.374694  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:11.874370  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:14.375095  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:16.874880  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	I1016 19:07:18.870877  337340 node_ready.go:38] duration metric: took 6m0.000146858s for node "ha-556988-m03" to be "Ready" ...
	I1016 19:07:18.873970  337340 out.go:203] 
	W1016 19:07:18.876680  337340 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1016 19:07:18.876697  337340 out.go:285] * 
	W1016 19:07:18.878873  337340 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:07:18.881589  337340 out.go:203] 
	
	
	==> CRI-O <==
	Oct 16 18:59:36 ha-556988 crio[667]: time="2025-10-16T18:59:36.033008604Z" level=info msg="Started container" PID=1192 containerID=668681e0d58e70e2edf23bedf32d99282f6a8c38b0aad26000be1021582b8b56 description=default/busybox-7b57f96db7-8m2wv/busybox id=e73f877a-ee31-407d-ac4c-a34a4abcd363 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f
	Oct 16 19:00:06 ha-556988 conmon[1141]: conmon ee0dc742d47b892b93ac <ninfo>: container 1150 exited with status 1
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.415993438Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=75767156-3fb6-42b4-95e2-d34aa2a5bea8 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.41793089Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8b5f67f6-e1d4-4af2-88c2-48fa40df96aa name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.419946292Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=58e00405-99c8-449e-a3ad-5392da1ae41a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.42034836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428022662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428394313Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3db4041b6d3bc223822867a19715c3e66ed2c364c6b3187c2a59cc7adbe12ade/merged/etc/passwd: no such file or directory"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428502664Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3db4041b6d3bc223822867a19715c3e66ed2c364c6b3187c2a59cc7adbe12ade/merged/etc/group: no such file or directory"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.431213384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.460374693Z" level=info msg="Created container e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3: kube-system/storage-provisioner/storage-provisioner" id=58e00405-99c8-449e-a3ad-5392da1ae41a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.469592921Z" level=info msg="Starting container: e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3" id=2b8bafce-4d00-4a8d-8c2a-a4b19468c0be name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.472182538Z" level=info msg="Started container" PID=1395 containerID=e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3 description=kube-system/storage-provisioner/storage-provisioner id=2b8bafce-4d00-4a8d-8c2a-a4b19468c0be name=/runtime.v1.RuntimeService/StartContainer sandboxID=3100d564efc4cf0ded67a741f8ebf6a46eeb48236dd12f0b244aa7eb0e1041e1
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.166222167Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.169795977Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.16983204Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.169854342Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173639915Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173676863Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173701159Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.176974688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.177010775Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.177034324Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.180287168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.180322968Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	e24f8a6878f29       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   3100d564efc4c       storage-provisioner                 kube-system
	668681e0d58e7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   3b5419232b288       busybox-7b57f96db7-8m2wv            default
	ee0dc742d47b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   3100d564efc4c       storage-provisioner                 kube-system
	d2ef4f1c6fd3d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   f62a65ca971ca       coredns-66bc5c9577-bg5gf            kube-system
	fa4be697bf069       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   9a193d0046bea       kindnet-c5vhh                       kube-system
	9f54a6f37bdff       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   219f0f758c58e       coredns-66bc5c9577-qnwbz            kube-system
	676cc3096c2c4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   2f36988f94206       kube-controller-manager-ha-556988   kube-system
	66e732aebd424       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   2bc6a25bda869       kube-proxy-l2lf6                    kube-system
	a6a97464c4b58       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   d09d9e3f4595d       kube-vip-ha-556988                  kube-system
	37de0677d0291       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   ff19c20039a2e       kube-apiserver-ha-556988            kube-system
	13005c03c7e83       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   2f36988f94206       kube-controller-manager-ha-556988   kube-system
	ccd1663977e23       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   16edb5468bfd8       etcd-ha-556988                      kube-system
	0947527fb7c66       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   9953eab01a12a       kube-scheduler-ha-556988            kube-system
	
	
	==> coredns [9f54a6f37bdffe68140f1859804fc0edaf64ea559a101f6caf876000479c9ee1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60434 - 54918 "HINFO IN 3143784560746213008.1236521785684304278. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01077593s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d2ef4f1c6fd3dddc27aea4bdc4cf4ce1714f112fa6b015df816ae128c747014c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37299 - 23942 "HINFO IN 3089919825197669795.1270930252494634912. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013048437s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-556988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_53_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:53:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:07:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:59:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-556988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                b59e7c71-f015-4beb-a0b1-1db2d92a9291
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8m2wv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-bg5gf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-qnwbz             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-556988                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-c5vhh                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-556988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-556988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-l2lf6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-556988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-556988                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m43s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x9 over 13m)      kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           13m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-556988 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           8m48s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   NodeHasSufficientMemory  8m21s (x8 over 8m21s)  kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m21s (x8 over 8m21s)  kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m21s (x8 over 8m21s)  kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m41s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           7m37s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	
	
	Name:               ha-556988-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_16T18_54_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:07:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-556988-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                7a9bc276-8208-4c5e-a8a7-151b962ba6f2
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-g6s82                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-556988-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9mrmf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-556988-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-556988-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mx9hc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-556988-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-556988-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 7m16s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Warning  CgroupV1                 9m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m26s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     9m25s (x8 over 9m26s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m25s (x8 over 9m26s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m25s (x8 over 9m26s)  kubelet          Node ha-556988-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             8m59s                  node-controller  Node ha-556988-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m48s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   Starting                 8m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m17s (x8 over 8m17s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m17s (x8 over 8m17s)  kubelet          Node ha-556988-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m17s (x8 over 8m17s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m41s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           7m37s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	
	
	Name:               ha-556988-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_16T18_55_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:55:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:58:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 16 Oct 2025 18:56:40 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 16 Oct 2025 18:56:40 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 16 Oct 2025 18:56:40 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 16 Oct 2025 18:56:40 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-556988-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                15435c47-1558-4a71-8111-15190f95190c
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zdc2h                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-556988-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-qj4cl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-556988-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-556988-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dqhtm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-556988-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-556988-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-556988-m03 event: Registered Node ha-556988-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-556988-m03 event: Registered Node ha-556988-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-556988-m03 event: Registered Node ha-556988-m03 in Controller
	  Normal  RegisteredNode  8m48s  node-controller  Node ha-556988-m03 event: Registered Node ha-556988-m03 in Controller
	  Normal  RegisteredNode  7m41s  node-controller  Node ha-556988-m03 event: Registered Node ha-556988-m03 in Controller
	  Normal  RegisteredNode  7m37s  node-controller  Node ha-556988-m03 event: Registered Node ha-556988-m03 in Controller
	  Normal  NodeNotReady    6m51s  node-controller  Node ha-556988-m03 status is now: NodeNotReady
	
	
	Name:               ha-556988-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_16T18_56_35_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:56:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:58:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-556988-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                3974a7c6-147c-48e8-b522-87d967a9ed5f
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-flq9x       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2j2kg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-556988-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m48s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           7m41s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           7m37s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   NodeNotReady             6m51s              node-controller  Node ha-556988-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.510048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035217] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.777829] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.353148] kauditd_printk_skb: 36 callbacks suppressed
	[Oct16 17:39] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=00000000a1708097{9P.session} n=00000000c48db394
	[  +0.001150] FS-Cache: O-key=[10] '34323935323233313231'
	[  +0.000794] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a1708097{9P.session} n=0000000008f2874d
	[  +0.001104] FS-Cache: N-key=[10] '34323935323233313231'
	[Oct16 17:40] hrtimer: interrupt took 46683506 ns
	[Oct16 18:30] kauditd_printk_skb: 8 callbacks suppressed
	[Oct16 18:32] overlayfs: idmapped layers are currently not supported
	[  +0.067059] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct16 18:38] overlayfs: idmapped layers are currently not supported
	[Oct16 18:39] overlayfs: idmapped layers are currently not supported
	[Oct16 18:53] overlayfs: idmapped layers are currently not supported
	[Oct16 18:54] overlayfs: idmapped layers are currently not supported
	[Oct16 18:55] overlayfs: idmapped layers are currently not supported
	[Oct16 18:56] overlayfs: idmapped layers are currently not supported
	[Oct16 18:57] overlayfs: idmapped layers are currently not supported
	[Oct16 18:58] overlayfs: idmapped layers are currently not supported
	[Oct16 18:59] overlayfs: idmapped layers are currently not supported
	[ +38.025144] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ccd1663977e230bbda3cae69e035a19bb725c3f88efd4340e2acdb82e35b17b4] <==
	{"level":"warn","ts":"2025-10-16T19:01:00.137726Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:00.137921Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:01.224657Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:01.224722Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:05.138369Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:05.138386Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:05.225977Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:05.226029Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:09.227379Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:09.227439Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:10.139014Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:10.139006Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:13.228727Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:13.228783Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:15.139951Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:15.139974Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dd9f3debc3328b7e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:17.230430Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-16T19:01:17.230561Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"dd9f3debc3328b7e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-16T19:01:17.849325Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"dd9f3debc3328b7e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-16T19:01:17.849470Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.849512Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.904504Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"dd9f3debc3328b7e","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-16T19:01:17.904633Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.939403Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.945273Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	
	
	==> kernel <==
	 19:07:20 up  1:49,  0 user,  load average: 0.24, 0.98, 1.50
	Linux ha-556988 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fa4be697bf0693026672a5f6c9fe73e79415080f58163a0e09e3473403170716] <==
	I1016 19:06:46.160020       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:06:56.160123       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:06:56.160161       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:06:56.160372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:06:56.160388       1 main.go:301] handling current node
	I1016 19:06:56.160401       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:06:56.160406       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:06:56.160461       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:06:56.160473       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:06.166319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:06.166355       1 main.go:301] handling current node
	I1016 19:07:06.166371       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:06.166377       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:06.166532       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:07:06.166546       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:06.166618       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:06.166629       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:07:16.159868       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:16.159905       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:07:16.160092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:16.160107       1 main.go:301] handling current node
	I1016 19:07:16.160120       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:16.160126       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:16.160187       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:07:16.160200       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7] <==
	I1016 18:59:34.894194       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 18:59:34.896075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1016 18:59:34.896820       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I1016 18:59:34.911822       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 18:59:34.911849       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 18:59:34.920290       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:59:34.943461       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:59:34.950382       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 18:59:34.950416       1 policy_source.go:240] refreshing policies
	I1016 18:59:34.957319       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:59:34.959365       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:59:34.965217       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:59:34.965371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:59:34.971502       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:59:35.000033       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:59:35.031357       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1016 18:59:35.038221       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1016 18:59:35.053357       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:59:37.014352       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 18:59:37.014434       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1016 18:59:38.259757       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1016 18:59:40.018709       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:59:40.263262       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1016 18:59:58.250950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1016 19:00:04.488288       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf] <==
	I1016 18:59:02.476495       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:59:04.091611       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1016 18:59:04.091720       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:59:04.093637       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1016 18:59:04.094321       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1016 18:59:04.094476       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:59:04.094572       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1016 18:59:20.022685       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [676cc3096c2c428c05ab34bcbe56aece39203ffe11f9216bd113fe47eebe8d46] <==
	I1016 18:59:39.953791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-556988-m03"
	I1016 18:59:39.955638       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:59:39.954135       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-556988-m04"
	I1016 18:59:39.955915       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 18:59:39.956300       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 18:59:39.956794       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:59:39.958437       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:59:39.958540       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:59:39.958616       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:59:39.958670       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:59:39.958704       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:59:39.958656       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:59:39.964429       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:59:39.964622       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:59:39.970819       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:59:39.972126       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:59:39.973735       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:59:39.980202       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:59:39.983741       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:59:39.983826       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:59:39.983857       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:59:39.984304       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:59:39.988621       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:05:33.126031       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-zdc2h"
	E1016 19:05:33.383262       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [66e732aebd424e1c2b5fe5fa62678b4f60db51b175af2e4bdf9c05d13a3604b1] <==
	I1016 18:59:36.431382       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:59:37.074112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:59:37.404317       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:59:37.420237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 18:59:37.440936       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:59:37.547567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:59:37.547677       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:59:37.566424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:59:37.566839       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:59:37.567055       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:59:37.568313       1 config.go:200] "Starting service config controller"
	I1016 18:59:37.569180       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:59:37.569272       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:59:37.569301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:59:37.569349       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:59:37.569432       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:59:37.570116       1 config.go:309] "Starting node config controller"
	I1016 18:59:37.593325       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:59:37.593349       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:59:37.670251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:59:37.670355       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:59:37.670385       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0947527fb7c6600575f80d864636e177c1330efa7ab3caff116116cd0d07fe91] <==
	E1016 18:59:19.210127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:59:20.223711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:59:20.272552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:59:20.286900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:59:21.024708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 18:59:23.850262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:59:25.366156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:59:25.440106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:59:25.455207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:59:25.526976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:59:25.693902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:59:25.715863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:59:26.150506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:59:26.525981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:59:27.199538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:59:27.780409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:59:28.329859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:59:28.766926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:59:29.490851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:59:29.827336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:59:30.023162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:59:30.629590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:59:31.265247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:59:33.627332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1016 18:59:46.572262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941000     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b32400f6-5ec6-4a22-87fc-4b9fb8b25976-lib-modules\") pod \"kube-proxy-l2lf6\" (UID: \"b32400f6-5ec6-4a22-87fc-4b9fb8b25976\") " pod="kube-system/kube-proxy-l2lf6"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941076     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b32400f6-5ec6-4a22-87fc-4b9fb8b25976-xtables-lock\") pod \"kube-proxy-l2lf6\" (UID: \"b32400f6-5ec6-4a22-87fc-4b9fb8b25976\") " pod="kube-system/kube-proxy-l2lf6"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941166     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-xtables-lock\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941256     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-lib-modules\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941277     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/916b69a5-8ee0-43ee-87fd-9a88caebbec8-tmp\") pod \"storage-provisioner\" (UID: \"916b69a5-8ee0-43ee-87fd-9a88caebbec8\") " pod="kube-system/storage-provisioner"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941319     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-cni-cfg\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.964270     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-556988\" already exists" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.964316     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.976099     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-556988\" already exists" pod="kube-system/etcd-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.976140     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.987350     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-556988\" already exists" pod="kube-system/kube-apiserver-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.987392     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.999523     803 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:59:35 ha-556988 kubelet[803]: E1016 18:59:35.015087     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-556988\" already exists" pod="kube-system/kube-controller-manager-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.039384     803 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.039591     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.064156     803 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.176886     803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-556988" podStartSLOduration=0.17686523 podStartE2EDuration="176.86523ms" podCreationTimestamp="2025-10-16 18:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:59:35.14812186 +0000 UTC m=+36.282512675" watchObservedRunningTime="2025-10-16 18:59:35.17686523 +0000 UTC m=+36.311256037"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.286741     803 scope.go:117] "RemoveContainer" containerID="13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf"
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.357678     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b WatchSource:0}: Error finding container 9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b: Status 404 returned error can't find the container with id 9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.401613     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a WatchSource:0}: Error finding container 219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a: Status 404 returned error can't find the container with id 219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.717419     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f WatchSource:0}: Error finding container 3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f: Status 404 returned error can't find the container with id 3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f
	Oct 16 18:59:59 ha-556988 kubelet[803]: E1016 18:59:59.007146     803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9\": container with ID starting with df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9 not found: ID does not exist" containerID="df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9"
	Oct 16 18:59:59 ha-556988 kubelet[803]: I1016 18:59:59.007669     803 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9" err="rpc error: code = NotFound desc = could not find container \"df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9\": container with ID starting with df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9 not found: ID does not exist"
	Oct 16 19:00:06 ha-556988 kubelet[803]: I1016 19:00:06.414711     803 scope.go:117] "RemoveContainer" containerID="ee0dc742d47b892b93aca268c637f4c52645442b0c386d0be82fcedaaa23bc41"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-556988 -n ha-556988
helpers_test.go:269: (dbg) Run:  kubectl --context ha-556988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-d75ps
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-556988 describe pod busybox-7b57f96db7-d75ps
helpers_test.go:290: (dbg) kubectl --context ha-556988 describe pod busybox-7b57f96db7-d75ps:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-d75ps
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzmh8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jzmh8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
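The FailedScheduling events above show why busybox-7b57f96db7-d75ps stays Pending after the restart: two nodes are rejected by the pod's own anti-affinity rule (a busybox replica is already running there) and the other two still carry an untolerated node.kubernetes.io/unreachable taint. The test's actual Deployment manifest is not included in this log; the sketch below is a hypothetical reconstruction, using the k8s.io/api Go types, of the kind of required anti-affinity term that produces exactly this scheduler message.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// busyboxAntiAffinity is a hypothetical reconstruction of the constraint implied by
	// "didn't match pod anti-affinity rules": no two pods labelled app=busybox may be
	// placed on the same node (topologyKey kubernetes.io/hostname).
	func busyboxAntiAffinity() *corev1.Affinity {
		return &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{
					{
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"app": "busybox"},
						},
						TopologyKey: "kubernetes.io/hostname",
					},
				},
			},
		}
	}

	func main() {
		// With 4 nodes, 2 already hosting busybox replicas and 2 unreachable,
		// the scheduler reports 0/4 nodes available, matching the events above.
		fmt.Println(busyboxAntiAffinity().PodAntiAffinity.
			RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
	}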
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (537.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (8.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 node delete m03 --alsologtostderr -v 5
E1016 19:07:24.359413  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 node delete m03 --alsologtostderr -v 5: (5.493660096s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5: exit status 7 (628.493058ms)

                                                
                                                
-- stdout --
	ha-556988
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-556988-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-556988-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:07:27.518576  343417 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:07:27.519244  343417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:07:27.519261  343417 out.go:374] Setting ErrFile to fd 2...
	I1016 19:07:27.519267  343417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:07:27.519566  343417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:07:27.519807  343417 out.go:368] Setting JSON to false
	I1016 19:07:27.519850  343417 mustload.go:65] Loading cluster: ha-556988
	I1016 19:07:27.520278  343417 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:07:27.520298  343417 status.go:174] checking status of ha-556988 ...
	I1016 19:07:27.520869  343417 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 19:07:27.521340  343417 notify.go:220] Checking for updates...
	I1016 19:07:27.544460  343417 status.go:371] ha-556988 host status = "Running" (err=<nil>)
	I1016 19:07:27.544491  343417 host.go:66] Checking if "ha-556988" exists ...
	I1016 19:07:27.544972  343417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 19:07:27.580952  343417 host.go:66] Checking if "ha-556988" exists ...
	I1016 19:07:27.581459  343417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:07:27.581508  343417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 19:07:27.610022  343417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 19:07:27.714868  343417 ssh_runner.go:195] Run: systemctl --version
	I1016 19:07:27.721544  343417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:07:27.735020  343417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:07:27.801768  343417 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:07:27.790045331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:07:27.802315  343417 kubeconfig.go:125] found "ha-556988" server: "https://192.168.49.254:8443"
	I1016 19:07:27.802352  343417 api_server.go:166] Checking apiserver status ...
	I1016 19:07:27.802407  343417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:07:27.816306  343417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/960/cgroup
	I1016 19:07:27.825878  343417 api_server.go:182] apiserver freezer: "13:freezer:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio/crio-37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7"
	I1016 19:07:27.825953  343417 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio/crio-37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7/freezer.state
	I1016 19:07:27.833611  343417 api_server.go:204] freezer state: "THAWED"
	I1016 19:07:27.833636  343417 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1016 19:07:27.842052  343417 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1016 19:07:27.842080  343417 status.go:463] ha-556988 apiserver status = Running (err=<nil>)
	I1016 19:07:27.842092  343417 status.go:176] ha-556988 status: &{Name:ha-556988 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 19:07:27.842110  343417 status.go:174] checking status of ha-556988-m02 ...
	I1016 19:07:27.842418  343417 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 19:07:27.864304  343417 status.go:371] ha-556988-m02 host status = "Running" (err=<nil>)
	I1016 19:07:27.864334  343417 host.go:66] Checking if "ha-556988-m02" exists ...
	I1016 19:07:27.864772  343417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 19:07:27.882386  343417 host.go:66] Checking if "ha-556988-m02" exists ...
	I1016 19:07:27.882686  343417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:07:27.882730  343417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 19:07:27.902164  343417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 19:07:28.010083  343417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:07:28.027580  343417 kubeconfig.go:125] found "ha-556988" server: "https://192.168.49.254:8443"
	I1016 19:07:28.027607  343417 api_server.go:166] Checking apiserver status ...
	I1016 19:07:28.027660  343417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:07:28.039598  343417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	I1016 19:07:28.048095  343417 api_server.go:182] apiserver freezer: "13:freezer:/docker/6e12d84db8a619e254ff17d31ff2d177ae34ff2a423fc1eb584d7f58217dfd45/crio/crio-25c607ba546b9d6fe63e7bdea477e684a4298acb7adc10be37dfbeb9681aea97"
	I1016 19:07:28.048167  343417 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6e12d84db8a619e254ff17d31ff2d177ae34ff2a423fc1eb584d7f58217dfd45/crio/crio-25c607ba546b9d6fe63e7bdea477e684a4298acb7adc10be37dfbeb9681aea97/freezer.state
	I1016 19:07:28.056360  343417 api_server.go:204] freezer state: "THAWED"
	I1016 19:07:28.056438  343417 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1016 19:07:28.068272  343417 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1016 19:07:28.068375  343417 status.go:463] ha-556988-m02 apiserver status = Running (err=<nil>)
	I1016 19:07:28.068400  343417 status.go:176] ha-556988-m02 status: &{Name:ha-556988-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 19:07:28.068455  343417 status.go:174] checking status of ha-556988-m04 ...
	I1016 19:07:28.068764  343417 cli_runner.go:164] Run: docker container inspect ha-556988-m04 --format={{.State.Status}}
	I1016 19:07:28.087742  343417 status.go:371] ha-556988-m04 host status = "Stopped" (err=<nil>)
	I1016 19:07:28.087771  343417 status.go:384] host is not running, skipping remaining checks
	I1016 19:07:28.087777  343417 status.go:176] ha-556988-m04 status: &{Name:ha-556988-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5" : exit status 7
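The non-zero exit itself stems from ha-556988-m04 being reported as Stopped in the status output above. The stderr trace also documents how the apiserver check is performed on each control-plane node: pgrep locates the kube-apiserver process, its freezer cgroup is read to confirm the container is THAWED (not paused), and finally the HA endpoint https://192.168.49.254:8443/healthz is probed. Below is a simplified, hypothetical Go sketch of that probe sequence, run locally rather than over the SSH runner the trace shows; it is not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverHealthy mirrors the three steps visible in the trace:
	// find the kube-apiserver PID, check its freezer state, then GET /healthz.
	func apiserverHealthy(endpoint string) error {
		// Step 1: newest kube-apiserver process (trace: `sudo pgrep -xnf kube-apiserver.*minikube.*`).
		pidOut, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return fmt.Errorf("kube-apiserver process not found: %w", err)
		}
		pid := strings.TrimSpace(string(pidOut))

		// Step 2: the freezer cgroup (cgroup v1) must report THAWED, i.e. the container
		// is not paused (trace: reading .../crio-<id>/freezer.state).
		stateOut, err := exec.Command("sh", "-c",
			"cat /sys/fs/cgroup/freezer$(grep ':freezer:' /proc/"+pid+"/cgroup | cut -d: -f3)/freezer.state").Output()
		if err != nil {
			return fmt.Errorf("reading freezer state: %w", err)
		}
		if state := strings.TrimSpace(string(stateOut)); state != "THAWED" {
			return fmt.Errorf("apiserver container is %s, expected THAWED", state)
		}

		// Step 3: probe the load-balanced endpoint, e.g. https://192.168.49.254:8443/healthz.
		// Certificate verification is skipped here only to keep the sketch self-contained.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := apiserverHealthy("https://192.168.49.254:8443"); err != nil {
			fmt.Println("apiserver not healthy:", err)
			return
		}
		fmt.Println("apiserver healthy")
	}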
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-556988
helpers_test.go:243: (dbg) docker inspect ha-556988:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000",
	        "Created": "2025-10-16T18:53:20.826320924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337466,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:58:51.979830748Z",
	            "FinishedAt": "2025-10-16T18:58:51.377562063Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/hosts",
	        "LogPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000-json.log",
	        "Name": "/ha-556988",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-556988:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-556988",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000",
	                "LowerDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-556988",
	                "Source": "/var/lib/docker/volumes/ha-556988/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-556988",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-556988",
	                "name.minikube.sigs.k8s.io": "ha-556988",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "065c5d4e8a096d5f9ffdf9b63e7c2cb496f2eb5bb12369ce1f2bda60d9a79e64",
	            "SandboxKey": "/var/run/docker/netns/065c5d4e8a09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-556988": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:e9:5a:29:59:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7adcf17f22baf4ae9b9dbf2b45e75904ea1540233e225aef4731989fd57a7fcc",
	                    "EndpointID": "6a0543cc77855a1155f456a458b934e2cd29f8314af96acb35727ae6ed5a96c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-556988",
	                        "ee539784e727"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-556988 -n ha-556988
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 logs -n 25: (1.344139055s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m02 sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m02.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt ha-556988-m04:/home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp testdata/cp-test.txt ha-556988-m04:/home/docker/cp-test.txt                                                             │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002313520/001/cp-test_ha-556988-m04.txt │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988:/home/docker/cp-test_ha-556988-m04_ha-556988.txt                       │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988.txt                                                 │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m02:/home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m02 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m03:/home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ node    │ ha-556988 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ node    │ ha-556988 node start m02 --alsologtostderr -v 5                                                                                      │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:58 UTC │
	│ node    │ ha-556988 node list --alsologtostderr -v 5                                                                                           │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │                     │
	│ stop    │ ha-556988 stop --alsologtostderr -v 5                                                                                                │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │ 16 Oct 25 18:58 UTC │
	│ start   │ ha-556988 start --wait true --alsologtostderr -v 5                                                                                   │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │                     │
	│ node    │ ha-556988 node list --alsologtostderr -v 5                                                                                           │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 19:07 UTC │                     │
	│ node    │ ha-556988 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 19:07 UTC │ 16 Oct 25 19:07 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:58:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:58:51.718625  337340 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:58:51.718820  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.718832  337340 out.go:374] Setting ErrFile to fd 2...
	I1016 18:58:51.718837  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.719085  337340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:58:51.719452  337340 out.go:368] Setting JSON to false
	I1016 18:58:51.720287  337340 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6061,"bootTime":1760635071,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:58:51.720360  337340 start.go:141] virtualization:  
	I1016 18:58:51.723622  337340 out.go:179] * [ha-556988] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:58:51.727453  337340 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:58:51.727561  337340 notify.go:220] Checking for updates...
	I1016 18:58:51.733207  337340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:58:51.736137  337340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:58:51.738974  337340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:58:51.741951  337340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:58:51.744907  337340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:58:51.748268  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:51.748399  337340 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:58:51.772958  337340 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:58:51.773087  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.833709  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.824777239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.833825  337340 docker.go:318] overlay module found
	I1016 18:58:51.836939  337340 out.go:179] * Using the docker driver based on existing profile
	I1016 18:58:51.839798  337340 start.go:305] selected driver: docker
	I1016 18:58:51.839818  337340 start.go:925] validating driver "docker" against &{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.839961  337340 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:58:51.840070  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.894329  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.884487993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.894716  337340 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:58:51.894754  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:51.894821  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:51.894871  337340 start.go:349] cluster config:
	{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.898184  337340 out.go:179] * Starting "ha-556988" primary control-plane node in "ha-556988" cluster
	I1016 18:58:51.901075  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:58:51.904106  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:58:51.906904  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:51.906960  337340 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:58:51.906971  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:58:51.906995  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:58:51.907065  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:58:51.907074  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:58:51.907213  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:51.927032  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:58:51.927054  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:58:51.927071  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:58:51.927094  337340 start.go:360] acquireMachinesLock for ha-556988: {Name:mk71c3a6201989099f6bf114603feb8455c41f5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:58:51.927153  337340 start.go:364] duration metric: took 41.945µs to acquireMachinesLock for "ha-556988"
	I1016 18:58:51.927187  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:58:51.927198  337340 fix.go:54] fixHost starting: 
	I1016 18:58:51.927452  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:51.944496  337340 fix.go:112] recreateIfNeeded on ha-556988: state=Stopped err=<nil>
	W1016 18:58:51.944531  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:58:51.947809  337340 out.go:252] * Restarting existing docker container for "ha-556988" ...
	I1016 18:58:51.947886  337340 cli_runner.go:164] Run: docker start ha-556988
	I1016 18:58:52.211064  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:52.238130  337340 kic.go:430] container "ha-556988" state is running.
	I1016 18:58:52.238496  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:52.265254  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:52.265525  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:58:52.265595  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:52.289105  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:52.289561  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:52.289576  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:58:52.290191  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:58:55.440597  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
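	The SSH endpoint used above, 127.0.0.1:33178, is simply the host port Docker mapped to the node container's 22/tcp; the inspect template in the log is one way to look it up. A minimal sketch, assuming the container name from this run:

	  # Which host port does Docker map to the container's SSH port?
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-556988
	  # Equivalent shorthand:
	  docker port ha-556988 22/tcp
	  # libmachine then dials ssh://docker@127.0.0.1:<port> with the machine's id_rsa key (see the sshutil lines below).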
	I1016 18:58:55.440631  337340 ubuntu.go:182] provisioning hostname "ha-556988"
	I1016 18:58:55.440701  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.458200  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.458510  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.458528  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988 && echo "ha-556988" | sudo tee /etc/hostname
	I1016 18:58:55.615084  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
	I1016 18:58:55.615165  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.633608  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.633925  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.633950  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:58:55.781429  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:58:55.781454  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:58:55.781481  337340 ubuntu.go:190] setting up certificates
	I1016 18:58:55.781490  337340 provision.go:84] configureAuth start
	I1016 18:58:55.781555  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:55.798617  337340 provision.go:143] copyHostCerts
	I1016 18:58:55.798664  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798709  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:58:55.798730  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798812  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:58:55.798915  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798938  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:58:55.798949  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798989  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:58:55.799046  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799068  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:58:55.799078  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799112  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:58:55.799198  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988 san=[127.0.0.1 192.168.49.2 ha-556988 localhost minikube]
	I1016 18:58:56.377628  337340 provision.go:177] copyRemoteCerts
	I1016 18:58:56.377703  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:58:56.377743  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.397097  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:56.500593  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:58:56.500663  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:58:56.518370  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:58:56.518433  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:58:56.536547  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:58:56.536628  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1016 18:58:56.555074  337340 provision.go:87] duration metric: took 773.569729ms to configureAuth
	I1016 18:58:56.555099  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:58:56.555326  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:56.555445  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.572643  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:56.572965  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:56.572986  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:58:56.890339  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:58:56.890428  337340 machine.go:96] duration metric: took 4.624892872s to provisionDockerMachine
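	Just before this, provisioning wrote CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarted CRI-O over SSH. A quick, non-authoritative way to confirm the file landed, assuming the kicbase container runs systemd as the systemctl calls in this log imply:

	  # Inspect the generated runtime options from the host.
	  docker exec ha-556988 cat /etc/sysconfig/crio.minikube
	  # Expected, per the command above: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  docker exec ha-556988 systemctl is-active crio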
	I1016 18:58:56.890454  337340 start.go:293] postStartSetup for "ha-556988" (driver="docker")
	I1016 18:58:56.890480  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:58:56.890607  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:58:56.890683  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.913382  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.017075  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:58:57.021857  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:58:57.021887  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:58:57.021899  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:58:57.021965  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:58:57.022045  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:58:57.022052  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:58:57.022160  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:58:57.030852  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:57.048968  337340 start.go:296] duration metric: took 158.482858ms for postStartSetup
	I1016 18:58:57.049157  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:58:57.049222  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.066845  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.166118  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:58:57.170752  337340 fix.go:56] duration metric: took 5.243547354s for fixHost
	I1016 18:58:57.170779  337340 start.go:83] releasing machines lock for "ha-556988", held for 5.243610027s
	I1016 18:58:57.170862  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:57.187672  337340 ssh_runner.go:195] Run: cat /version.json
	I1016 18:58:57.187699  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:58:57.187723  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.187757  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.206208  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.213346  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.391366  337340 ssh_runner.go:195] Run: systemctl --version
	I1016 18:58:57.397910  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:58:57.434230  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:58:57.439686  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:58:57.439757  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:58:57.447828  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:58:57.447851  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:58:57.447886  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:58:57.447952  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:58:57.463944  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:58:57.477406  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:58:57.477468  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:58:57.493693  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:58:57.507255  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:58:57.614114  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:58:57.729976  337340 docker.go:234] disabling docker service ...
	I1016 18:58:57.730050  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:58:57.745940  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:58:57.758869  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:58:57.875693  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:58:57.984271  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:58:57.997324  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:58:58.012287  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:58:58.012387  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.023645  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:58:58.023740  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.036244  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.046489  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.055569  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:58:58.065264  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.075123  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.084654  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.094603  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:58:58.102554  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:58:58.110013  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.218071  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
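	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pause image kubeadm expects, the cgroupfs cgroup manager (matching the detected host driver), and the unprivileged-port sysctl, before systemd is reloaded and the runtime restarted; the socket wait below confirms it came back. Condensed into a standalone sketch (same paths as the log; the exact TOML layout of 02-crio.conf may differ):

	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  # Pause image must match the Kubernetes version being provisioned.
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	  # cgroup manager must agree with the kubelet's cgroupDriver (cgroupfs here).
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	  # Verify the edits, then restart CRI-O so they take effect.
	  grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' "$CONF"
	  sudo systemctl daemon-reload && sudo systemctl restart crio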
	I1016 18:58:58.347916  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:58:58.348026  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:58:58.351852  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:58:58.351953  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:58:58.355554  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:58:58.382893  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:58:58.383032  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.410837  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.446345  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:58:58.449238  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:58:58.465498  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:58:58.469406  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
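	The bash one-liner above pins host.minikube.internal to the Docker network gateway by rewriting /etc/hosts through a temp file; the same pattern is applied further down for control-plane.minikube.internal and the HA VIP. A hedged check of the result, with the addresses from this run:

	  # After both rewrites, /etc/hosts should contain entries equivalent to:
	  #   192.168.49.1     host.minikube.internal
	  #   192.168.49.254   control-plane.minikube.internal
	  grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
	  getent hosts host.minikube.internal   # resolves from /etc/hosts, no DNS round-trip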
	I1016 18:58:58.479415  337340 kubeadm.go:883] updating cluster {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:58:58.479566  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:58.479620  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.516159  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.516181  337340 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:58:58.516239  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.543999  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.544030  337340 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:58:58.544040  337340 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 18:58:58.544140  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:58:58.544225  337340 ssh_runner.go:195] Run: crio config
	I1016 18:58:58.618937  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:58.618957  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:58.618981  337340 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:58:58.619008  337340 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-556988 NodeName:ha-556988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:58:58.619133  337340 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-556988"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
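	The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp lines below). It is not something this restart path re-runs through kubeadm init, but a file like it can be sanity-checked with kubeadm's dry-run mode, using the binaries path from this log:

	  # Validate the generated config without changing anything on the node.
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run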
	I1016 18:58:58.619160  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:58:58.619222  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:58:58.631579  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:58:58.631697  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
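	Since the lsmod check found no ip_vs modules, kube-vip's IPVS load-balancing is skipped and the manifest above relies on leader election plus ARP (vip_arp=true) to float the 192.168.49.254 VIP between control-plane nodes; it is written as a static pod under /etc/kubernetes/manifests further below. A rough way to confirm the VIP on whichever node currently holds the plndr-cp-lock lease, assuming the interface from this config:

	  # The VIP should be bound on eth0 of the current kube-vip leader.
	  ip addr show dev eth0 | grep 192.168.49.254
	  # The static pod is visible to the runtime even before the API server answers.
	  sudo crictl pods --name kube-vip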
	I1016 18:58:58.631769  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:58:58.640083  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:58:58.640188  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1016 18:58:58.648089  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1016 18:58:58.661375  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:58:58.674583  337340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1016 18:58:58.687345  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:58:58.700772  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:58:58.704503  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:58:58.714276  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.833486  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
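	The scp lines just above drop the kubelet drop-in (10-kubeadm.conf), the kubelet unit, the rendered kubeadm.yaml.new and the kube-vip static pod manifest into place before systemd is reloaded and the kubelet started. A quick way to confirm that step on the node (paths taken from the log):

	  # Is the kubelet up, and did it pick up the minikube drop-in?
	  sudo systemctl is-active kubelet
	  systemctl cat kubelet | grep '10-kubeadm.conf'
	  # Static pods the kubelet will launch on its own:
	  ls /etc/kubernetes/manifests/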
	I1016 18:58:58.851263  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.2
	I1016 18:58:58.851288  337340 certs.go:195] generating shared ca certs ...
	I1016 18:58:58.851306  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:58.851471  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:58:58.851524  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:58:58.851537  337340 certs.go:257] generating profile certs ...
	I1016 18:58:58.851633  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:58:58.851666  337340 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797
	I1016 18:58:58.851690  337340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1016 18:58:59.152876  337340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 ...
	I1016 18:58:59.152960  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797: {Name:mk3d22e55d5c37c04716dc4d1ee3cbc4538fbdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153223  337340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 ...
	I1016 18:58:59.153265  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797: {Name:mkda3eb1676258b3c7a46448934b59023d353a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153432  337340 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt
	I1016 18:58:59.153636  337340 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key
	I1016 18:58:59.153853  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:58:59.153891  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:58:59.153923  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:58:59.153965  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:58:59.153998  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:58:59.154028  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:58:59.154076  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:58:59.154112  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:58:59.154143  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:58:59.154239  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:58:59.154300  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:58:59.154325  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:58:59.154381  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:58:59.154435  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:58:59.154491  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:58:59.154609  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:59.154690  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.154737  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.154771  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.155500  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:58:59.174654  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:58:59.194053  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:58:59.220036  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:58:59.241089  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:58:59.259308  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:58:59.276555  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:58:59.293855  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:58:59.311467  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:58:59.329708  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:58:59.347304  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:58:59.364602  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:58:59.377635  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:58:59.384255  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:58:59.393733  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397737  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397824  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.438696  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:58:59.446893  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:58:59.455572  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459600  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459668  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.500823  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:58:59.509003  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:58:59.520724  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528394  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528467  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.578056  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
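	The three install blocks above follow the standard OpenSSL hashed-directory convention: each CA from /usr/share/ca-certificates is linked into /etc/ssl/certs both under its own name and under its subject hash with a .0 suffix, which is how TLS clients locate it. One iteration, condensed (file name and hash taken from this log):

	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA
	  sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"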
	I1016 18:58:59.586838  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:58:59.594144  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:58:59.638647  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:58:59.694080  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:58:59.765575  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:58:59.865472  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:58:59.931581  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
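	The batch of openssl calls above checks that the existing control-plane certificates are still usable: -checkend 86400 exits 0 only if the certificate remains valid for at least another 86400 seconds (24 h). A compact version over a few of the same files (paths from this log):

	  # Non-zero exit means the certificate expires within 24h.
	  for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
	    sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	      && echo "${c}: valid for at least 24h"
	  done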
	I1016 18:58:59.986682  337340 kubeadm.go:400] StartCluster: {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:59.986889  337340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:58:59.986987  337340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:59:00.020883  337340 cri.go:89] found id: "a6a97464c4b58734820a4c747fbaa58980bfcb3cdc5b94d0a49804bd9ecaf2d2"
	I1016 18:59:00.020964  337340 cri.go:89] found id: "37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7"
	I1016 18:59:00.020984  337340 cri.go:89] found id: "13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf"
	I1016 18:59:00.021007  337340 cri.go:89] found id: "ccd1663977e230bbda3cae69e035a19bb725c3f88efd4340e2acdb82e35b17b4"
	I1016 18:59:00.021041  337340 cri.go:89] found id: "0947527fb7c6600575f80d864636e177c1330efa7ab3caff116116cd0d07fe91"
	I1016 18:59:00.021071  337340 cri.go:89] found id: ""
	I1016 18:59:00.021222  337340 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:59:00.048970  337340 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:59:00Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:59:00.049191  337340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:59:00.064913  337340 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:59:00.065020  337340 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:59:00.065128  337340 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:59:00.081513  337340 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:00.082142  337340 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-556988" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.082376  337340 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "ha-556988" cluster setting kubeconfig missing "ha-556988" context setting]
	I1016 18:59:00.082852  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.083778  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:59:00.084642  337340 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1016 18:59:00.084775  337340 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:59:00.084800  337340 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:59:00.084835  337340 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:59:00.084861  337340 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:59:00.084885  337340 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:59:00.085481  337340 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:59:00.133777  337340 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1016 18:59:00.133865  337340 kubeadm.go:601] duration metric: took 68.819342ms to restartPrimaryControlPlane
	I1016 18:59:00.133892  337340 kubeadm.go:402] duration metric: took 147.219085ms to StartCluster
	I1016 18:59:00.133962  337340 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.134087  337340 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.134991  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.135381  337340 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:00.135451  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:00.135503  337340 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:59:00.136478  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.165207  337340 out.go:179] * Enabled addons: 
	I1016 18:59:00.168421  337340 addons.go:514] duration metric: took 32.907014ms for enable addons: enabled=[]
	I1016 18:59:00.168517  337340 start.go:246] waiting for cluster config update ...
	I1016 18:59:00.168542  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:00.191362  337340 out.go:203] 
	I1016 18:59:00.209821  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.209961  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.213495  337340 out.go:179] * Starting "ha-556988-m02" control-plane node in "ha-556988" cluster
	I1016 18:59:00.216452  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:00.223747  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:00.226672  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:00.226714  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:00.226842  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:00.226852  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:00.227106  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.227394  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:00.266622  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:00.266645  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:00.266659  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:00.266685  337340 start.go:360] acquireMachinesLock for ha-556988-m02: {Name:mkb742ea24d411e97f6bd75961598d91ba358bd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:00.266743  337340 start.go:364] duration metric: took 41.445µs to acquireMachinesLock for "ha-556988-m02"
	I1016 18:59:00.266766  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:00.266772  337340 fix.go:54] fixHost starting: m02
	I1016 18:59:00.267061  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.297319  337340 fix.go:112] recreateIfNeeded on ha-556988-m02: state=Stopped err=<nil>
	W1016 18:59:00.297360  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:00.300819  337340 out.go:252] * Restarting existing docker container for "ha-556988-m02" ...
	I1016 18:59:00.300940  337340 cli_runner.go:164] Run: docker start ha-556988-m02
	I1016 18:59:00.708144  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.733543  337340 kic.go:430] container "ha-556988-m02" state is running.
	I1016 18:59:00.733902  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:00.760804  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.761309  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:00.761403  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:00.808146  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:00.808685  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:00.808701  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:00.809303  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40522->127.0.0.1:33183: read: connection reset by peer
	I1016 18:59:04.034070  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.034139  337340 ubuntu.go:182] provisioning hostname "ha-556988-m02"
	I1016 18:59:04.034243  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.063655  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.063975  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.063993  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m02 && echo "ha-556988-m02" | sudo tee /etc/hostname
	I1016 18:59:04.267030  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.267113  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.300780  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.301103  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.301127  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:04.469711  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:04.469796  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:04.469828  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:04.469864  337340 provision.go:84] configureAuth start
	I1016 18:59:04.469974  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:04.508993  337340 provision.go:143] copyHostCerts
	I1016 18:59:04.509035  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509067  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:04.509074  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509305  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:04.509422  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509441  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:04.509446  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509496  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:04.509545  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509562  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:04.509566  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509591  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:04.509649  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m02 san=[127.0.0.1 192.168.49.3 ha-556988-m02 localhost minikube]
	I1016 18:59:05.303068  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:05.303142  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:05.303195  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.322174  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:05.428054  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:05.428132  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:59:05.461825  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:05.461888  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:05.487317  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:05.487378  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:05.516798  337340 provision.go:87] duration metric: took 1.046901762s to configureAuth
	I1016 18:59:05.516822  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:05.517061  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:05.517252  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.546833  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:05.547150  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:05.547168  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:05.937754  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:05.937782  337340 machine.go:96] duration metric: took 5.176458229s to provisionDockerMachine
	I1016 18:59:05.937802  337340 start.go:293] postStartSetup for "ha-556988-m02" (driver="docker")
	I1016 18:59:05.937814  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:05.937890  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:05.937937  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.955324  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.057291  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:06.060623  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:06.060656  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:06.060668  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:06.060728  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:06.060812  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:06.060824  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:06.060930  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:06.068899  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:06.087392  337340 start.go:296] duration metric: took 149.572621ms for postStartSetup
	I1016 18:59:06.087476  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:06.087533  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.109477  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.222886  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:06.229852  337340 fix.go:56] duration metric: took 5.963072953s for fixHost
	I1016 18:59:06.229883  337340 start.go:83] releasing machines lock for "ha-556988-m02", held for 5.963130679s
	I1016 18:59:06.229963  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:06.266689  337340 out.go:179] * Found network options:
	I1016 18:59:06.273332  337340 out.go:179]   - NO_PROXY=192.168.49.2
	W1016 18:59:06.276561  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:06.276606  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:06.276683  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:06.276749  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.276754  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:06.276816  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.317825  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.323025  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.671873  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:06.677594  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:06.677732  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:06.690261  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:06.690335  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:06.690384  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:06.690471  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:06.714650  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:06.733867  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:06.733929  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:06.752522  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:06.775910  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:06.992043  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:07.227541  337340 docker.go:234] disabling docker service ...
	I1016 18:59:07.227607  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:07.250512  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:07.276078  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:07.484122  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:07.729089  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:07.767438  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:07.809637  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:07.809753  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.832720  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:07.832842  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.859881  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.889284  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.901694  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:07.922354  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.941649  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.951572  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.961513  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:07.970666  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:59:07.978742  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:08.323908  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:59:09.667321  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.343330778s)
	I1016 18:59:09.667346  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:59:09.667400  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:59:09.677469  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:59:09.677549  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:59:09.683697  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:59:09.731470  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:59:09.731621  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.782976  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.844144  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:59:09.847254  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 18:59:09.850158  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:59:09.881787  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:59:09.886123  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:09.903709  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 18:59:09.903953  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:09.904211  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:59:09.944289  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 18:59:09.944603  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.3
	I1016 18:59:09.944620  337340 certs.go:195] generating shared ca certs ...
	I1016 18:59:09.944638  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:09.944779  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:59:09.944832  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:59:09.944844  337340 certs.go:257] generating profile certs ...
	I1016 18:59:09.944939  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:59:09.945027  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.2ae973c7
	I1016 18:59:09.945079  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:59:09.945092  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:59:09.945106  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:59:09.945127  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:59:09.945166  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:59:09.945182  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:59:09.945202  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:59:09.945213  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:59:09.945233  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:59:09.945291  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:59:09.945327  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:59:09.945341  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:59:09.945370  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:59:09.945403  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:59:09.945429  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:59:09.945482  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:09.945516  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:09.945534  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:59:09.945549  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:59:09.945612  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:59:09.972941  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:59:10.097521  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 18:59:10.102513  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 18:59:10.114147  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 18:59:10.119117  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 18:59:10.130126  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 18:59:10.134419  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 18:59:10.144627  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 18:59:10.148520  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 18:59:10.157921  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 18:59:10.161674  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 18:59:10.171535  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 18:59:10.175229  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 18:59:10.184604  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:59:10.206415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:59:10.228102  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:59:10.258566  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:59:10.283952  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:59:10.306580  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:59:10.329415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:59:10.348969  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:59:10.368321  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:59:10.387180  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:59:10.408929  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:59:10.429114  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 18:59:10.444245  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 18:59:10.458197  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 18:59:10.472176  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 18:59:10.485882  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 18:59:10.499848  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 18:59:10.515126  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 18:59:10.528667  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:59:10.535446  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:59:10.544186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548237  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548342  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.591605  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:59:10.600300  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:59:10.608985  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612817  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612923  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.655658  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 18:59:10.664193  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:59:10.673263  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677209  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677288  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.718855  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:59:10.726829  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:59:10.730876  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:59:10.773328  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:59:10.815232  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:59:10.858016  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:59:10.899603  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:59:10.942507  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 18:59:10.988343  337340 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1016 18:59:10.988480  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:59:10.988535  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:59:10.988601  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:59:11.002298  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:11.002415  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 18:59:11.002494  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:59:11.011536  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:59:11.011651  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 18:59:11.021905  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 18:59:11.037889  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:59:11.051536  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:59:11.069953  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:59:11.074152  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:11.086164  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.252847  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.266706  337340 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:11.267048  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:11.273634  337340 out.go:179] * Verifying Kubernetes components...
	I1016 18:59:11.276480  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.421023  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.436654  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 18:59:11.436746  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 18:59:11.437099  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m02" to be "Ready" ...
	I1016 18:59:34.862749  337340 node_ready.go:49] node "ha-556988-m02" is "Ready"
	I1016 18:59:34.862783  337340 node_ready.go:38] duration metric: took 23.425601966s for node "ha-556988-m02" to be "Ready" ...
	I1016 18:59:34.862797  337340 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:59:34.862859  337340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:59:34.885329  337340 api_server.go:72] duration metric: took 23.618240686s to wait for apiserver process to appear ...
	I1016 18:59:34.885358  337340 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:59:34.885377  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:34.897604  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:34.897640  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.386323  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.400088  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.400123  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.885493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.987319  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.987359  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.385456  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.412352  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.412390  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.885906  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.906763  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.906805  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.386256  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.404132  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.404163  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.885488  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.894320  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.894358  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:38.385493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:38.394925  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 18:59:38.395973  337340 api_server.go:141] control plane version: v1.34.1
	I1016 18:59:38.396011  337340 api_server.go:131] duration metric: took 3.51063495s to wait for apiserver health ...
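	
	The repeated 500s above are the API server reporting post-start hooks that had not finished yet (rbac/bootstrap-roles, and initially start-kubernetes-service-cidr-controller); once every check flips to [+] the endpoint returns 200 and the wait completes. The same verbose breakdown can be reproduced by hand; a hedged sketch, assuming the ha-556988 kubeconfig context and the 192.168.49.2:8443 endpoint shown in this run:
	
	    # verbose healthz via kubectl (same per-check output the log prints above)
	    kubectl --context ha-556988 get --raw '/healthz?verbose'
	    # or hit the endpoint directly; /healthz is anonymously readable under default RBAC
	    curl -k 'https://192.168.49.2:8443/healthz?verbose'
	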
	I1016 18:59:38.396021  337340 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:59:38.401864  337340 system_pods.go:59] 26 kube-system pods found
	I1016 18:59:38.401911  337340 system_pods.go:61] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401923  337340 system_pods.go:61] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401929  337340 system_pods.go:61] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.401935  337340 system_pods.go:61] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.401940  337340 system_pods.go:61] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.401952  337340 system_pods.go:61] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.401957  337340 system_pods.go:61] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.401968  337340 system_pods.go:61] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.401972  337340 system_pods.go:61] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.401979  337340 system_pods.go:61] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.401988  337340 system_pods.go:61] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.401994  337340 system_pods.go:61] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.402001  337340 system_pods.go:61] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402018  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402024  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.402030  337340 system_pods.go:61] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.402037  337340 system_pods.go:61] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.402041  337340 system_pods.go:61] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.402049  337340 system_pods.go:61] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.402060  337340 system_pods.go:61] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402068  337340 system_pods.go:61] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402073  337340 system_pods.go:61] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.402077  337340 system_pods.go:61] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.402081  337340 system_pods.go:61] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.402085  337340 system_pods.go:61] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.402089  337340 system_pods.go:61] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.402095  337340 system_pods.go:74] duration metric: took 6.067311ms to wait for pod list to return data ...
	I1016 18:59:38.402109  337340 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:59:38.406892  337340 default_sa.go:45] found service account: "default"
	I1016 18:59:38.406919  337340 default_sa.go:55] duration metric: took 4.803341ms for default service account to be created ...
	I1016 18:59:38.406930  337340 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:59:38.413271  337340 system_pods.go:86] 26 kube-system pods found
	I1016 18:59:38.413316  337340 system_pods.go:89] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413326  337340 system_pods.go:89] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413332  337340 system_pods.go:89] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.413337  337340 system_pods.go:89] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.413343  337340 system_pods.go:89] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.413350  337340 system_pods.go:89] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.413355  337340 system_pods.go:89] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.413367  337340 system_pods.go:89] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.413379  337340 system_pods.go:89] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.413390  337340 system_pods.go:89] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.413396  337340 system_pods.go:89] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.413406  337340 system_pods.go:89] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.413413  337340 system_pods.go:89] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413425  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413430  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.413435  337340 system_pods.go:89] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.413440  337340 system_pods.go:89] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.413444  337340 system_pods.go:89] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.413456  337340 system_pods.go:89] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.413467  337340 system_pods.go:89] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413474  337340 system_pods.go:89] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413486  337340 system_pods.go:89] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.413491  337340 system_pods.go:89] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.413495  337340 system_pods.go:89] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.413498  337340 system_pods.go:89] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.413502  337340 system_pods.go:89] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.413515  337340 system_pods.go:126] duration metric: took 6.570484ms to wait for k8s-apps to be running ...
	I1016 18:59:38.413533  337340 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:59:38.413612  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:59:38.430123  337340 system_svc.go:56] duration metric: took 16.57935ms WaitForService to wait for kubelet
	I1016 18:59:38.430164  337340 kubeadm.go:586] duration metric: took 27.163079108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:59:38.430184  337340 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:59:38.453899  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453938  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453950  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453964  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453969  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453977  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453981  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453986  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453993  337340 node_conditions.go:105] duration metric: took 23.803362ms to run NodePressure ...
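	
	The capacity figures above (2 CPUs and 203034800Ki of ephemeral storage per node) come from the node objects; a hedged way to pull the same numbers directly, assuming the ha-556988 context:
	
	    kubectl --context ha-556988 get nodes \
	      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
	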
	I1016 18:59:38.454005  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:38.454041  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:38.457719  337340 out.go:203] 
	I1016 18:59:38.460987  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:38.461187  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.464790  337340 out.go:179] * Starting "ha-556988-m03" control-plane node in "ha-556988" cluster
	I1016 18:59:38.468557  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:38.471645  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:38.474579  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:38.474688  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:38.474647  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:38.475030  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:38.475073  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:38.475235  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.500130  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:38.500149  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:38.500163  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:38.500186  337340 start.go:360] acquireMachinesLock for ha-556988-m03: {Name:mk34d9a60e195460efb0e14fede3a8b24d8e28a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:38.500240  337340 start.go:364] duration metric: took 38.999µs to acquireMachinesLock for "ha-556988-m03"
	I1016 18:59:38.500259  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:38.500264  337340 fix.go:54] fixHost starting: m03
	I1016 18:59:38.500516  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.520771  337340 fix.go:112] recreateIfNeeded on ha-556988-m03: state=Stopped err=<nil>
	W1016 18:59:38.520796  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:38.523984  337340 out.go:252] * Restarting existing docker container for "ha-556988-m03" ...
	I1016 18:59:38.524069  337340 cli_runner.go:164] Run: docker start ha-556988-m03
	I1016 18:59:38.865706  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.891919  337340 kic.go:430] container "ha-556988-m03" state is running.
	I1016 18:59:38.895965  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:38.924344  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.924714  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:38.924805  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:38.953535  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:38.953854  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:38.954163  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:38.955105  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:59:42.156520  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.156559  337340 ubuntu.go:182] provisioning hostname "ha-556988-m03"
	I1016 18:59:42.156649  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.195862  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.196197  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.196217  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m03 && echo "ha-556988-m03" | sudo tee /etc/hostname
	I1016 18:59:42.415761  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.415927  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.448329  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.448631  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.448648  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:42.655633  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:42.655699  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:42.655755  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:42.655798  337340 provision.go:84] configureAuth start
	I1016 18:59:42.655888  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:42.682731  337340 provision.go:143] copyHostCerts
	I1016 18:59:42.682774  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682809  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:42.682816  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682894  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:42.683003  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683029  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:42.683034  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683063  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:42.683113  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683134  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:42.683138  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683162  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:42.683208  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m03 san=[127.0.0.1 192.168.49.4 ha-556988-m03 localhost minikube]
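	
	The SAN list in the line above ([127.0.0.1 192.168.49.4 ha-556988-m03 localhost minikube]) is what ends up in machines/server.pem. A hedged check of what was actually issued, using the path from this log:
	
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	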
	I1016 18:59:42.986072  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:42.986191  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:42.986266  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.009339  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.190424  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:43.190488  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:43.234240  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:43.234303  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:43.271524  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:43.271634  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:59:43.309031  337340 provision.go:87] duration metric: took 653.205044ms to configureAuth
	I1016 18:59:43.309101  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:43.309396  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:43.309551  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.341419  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:43.341745  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:43.341761  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:43.818670  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:43.818698  337340 machine.go:96] duration metric: took 4.89396612s to provisionDockerMachine
	I1016 18:59:43.818717  337340 start.go:293] postStartSetup for "ha-556988-m03" (driver="docker")
	I1016 18:59:43.818729  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:43.818800  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:43.818847  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.843907  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.949206  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:43.952687  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:43.952714  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:43.952725  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:43.952777  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:43.952858  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:43.952870  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:43.952966  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:43.960926  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:43.978806  337340 start.go:296] duration metric: took 160.073239ms for postStartSetup
	I1016 18:59:43.978931  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:43.979022  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.996302  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.105727  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:44.111903  337340 fix.go:56] duration metric: took 5.611630616s for fixHost
	I1016 18:59:44.111982  337340 start.go:83] releasing machines lock for "ha-556988-m03", held for 5.611732928s
	I1016 18:59:44.112098  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:44.134145  337340 out.go:179] * Found network options:
	I1016 18:59:44.137067  337340 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1016 18:59:44.139998  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140032  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140058  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140075  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:44.140162  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:44.140230  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.140496  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:44.140567  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.164491  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.165069  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.454001  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:44.465509  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:44.465581  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:44.480708  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:44.480733  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:44.480764  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:44.480811  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:44.509331  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:44.557844  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:44.557910  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:44.588703  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:44.608697  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:44.891467  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:45.246520  337340 docker.go:234] disabling docker service ...
	I1016 18:59:45.246692  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:45.273127  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:45.348286  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:45.631385  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:45.856092  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:45.872650  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:45.898496  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:45.898570  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.916170  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:45.916240  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.931066  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.942127  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.952558  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:45.963182  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.973482  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.986310  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
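	
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.10.1 pause image, cgroup_manager "cgroupfs", conmon_cgroup "pod", and the net.ipv4.ip_unprivileged_port_start=0 sysctl. A hedged way to confirm the result on the node (container name taken from this log; with the docker driver each node is a container reachable via docker exec):
	
	    docker exec ha-556988-m03 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	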
	I1016 18:59:45.996358  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:46.016551  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:59:46.027307  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:46.234905  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:01:16.580381  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345368285s)
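	
	That restart of crio on m03 took just over 90 seconds, which accounts for most of this step's wall-clock time. A hedged way to look for the cause after the fact, again using docker exec against the node container named in this log:
	
	    docker exec ha-556988-m03 systemctl status crio --no-pager
	    docker exec ha-556988-m03 journalctl -u crio --no-pager -n 100
	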
	I1016 19:01:16.580410  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:01:16.580469  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:01:16.585512  337340 start.go:563] Will wait 60s for crictl version
	I1016 19:01:16.585597  337340 ssh_runner.go:195] Run: which crictl
	I1016 19:01:16.589679  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:01:16.622370  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:01:16.622451  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.658490  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.704130  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:01:16.707094  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 19:01:16.709928  337340 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1016 19:01:16.713018  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:01:16.729609  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 19:01:16.733845  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:16.745323  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 19:01:16.745573  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:16.745830  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 19:01:16.768218  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 19:01:16.768499  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.4
	I1016 19:01:16.768516  337340 certs.go:195] generating shared ca certs ...
	I1016 19:01:16.768531  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:01:16.768657  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:01:16.768700  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:01:16.768712  337340 certs.go:257] generating profile certs ...
	I1016 19:01:16.768792  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 19:01:16.768863  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.a8cc042e
	I1016 19:01:16.768908  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 19:01:16.768921  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 19:01:16.768935  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 19:01:16.768951  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 19:01:16.768967  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 19:01:16.768979  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 19:01:16.768993  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 19:01:16.769005  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 19:01:16.769021  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 19:01:16.769073  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:01:16.769107  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:01:16.769120  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:01:16.769171  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:01:16.769198  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:01:16.769219  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:01:16.769266  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:01:16.769303  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 19:01:16.769321  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:16.769333  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 19:01:16.769395  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 19:01:16.790995  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 19:01:16.889480  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 19:01:16.893451  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 19:01:16.901926  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 19:01:16.905634  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 19:01:16.914578  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 19:01:16.918356  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 19:01:16.926812  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 19:01:16.930535  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 19:01:16.940123  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 19:01:16.944094  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 19:01:16.953660  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 19:01:16.957601  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 19:01:16.966798  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:01:16.985414  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:01:17.016239  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:01:17.039046  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:01:17.060181  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 19:01:17.080570  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:01:17.105243  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:01:17.127158  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:01:17.146687  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:01:17.165827  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:01:17.185097  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:01:17.205538  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 19:01:17.220414  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 19:01:17.233996  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 19:01:17.248515  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 19:01:17.264946  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 19:01:17.279635  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 19:01:17.293984  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 19:01:17.308573  337340 ssh_runner.go:195] Run: openssl version
	I1016 19:01:17.315622  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:01:17.326067  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330066  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330132  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.373334  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:01:17.382328  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:01:17.393741  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398032  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398108  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.446048  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:01:17.454686  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:01:17.471186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475661  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475768  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.543984  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:01:17.583902  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:01:17.596353  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:01:17.693798  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:01:17.818221  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:01:17.876853  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:01:17.929859  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:01:18.028781  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
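	
	Each -checkend 86400 run above asks openssl whether the certificate in question expires within the next 86400 seconds (24 hours); exit status 0 means it is still good for at least a day. The equivalent manual check for one of these certs, run on the node (a sketch using a path from this log):
	
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/etcd/server.crt
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo 'valid for at least 24h' || echo 'expires within 24h'
	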
	I1016 19:01:18.102665  337340 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1016 19:01:18.102853  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
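	
	The ExecStart line above pins the node's identity for this join: --hostname-override=ha-556988-m03 and --node-ip=192.168.49.4. A hedged way to confirm the running kubelet actually picked those flags up once it is restarted further down:
	
	    docker exec ha-556988-m03 pgrep -af kubelet
	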
	I1016 19:01:18.102905  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 19:01:18.102986  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 19:01:18.130313  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:01:18.130424  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 19:01:18.130517  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:01:18.145569  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:01:18.145719  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 19:01:18.158741  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 19:01:18.175520  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:01:18.201069  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 19:01:18.223378  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 19:01:18.230855  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:18.262619  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.515974  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.534144  337340 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:01:18.534496  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:18.537694  337340 out.go:179] * Verifying Kubernetes components...
	I1016 19:01:18.540519  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.853344  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.870280  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 19:01:18.870409  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 19:01:18.870686  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m03" to be "Ready" ...
	W1016 19:01:20.874310  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:22.875099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:24.875540  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:27.374249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:29.375013  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:31.874737  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:34.373989  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:36.375778  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:38.874593  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:40.874828  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:42.875042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:45.378712  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:47.875029  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:49.875081  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:52.374191  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:54.374870  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:56.874176  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:58.874680  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:00.875335  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:03.374728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:05.874729  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:07.874820  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:10.374640  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:12.374741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:14.375254  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:16.874287  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:19.375567  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:21.874303  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:24.374724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:26.874201  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:28.875139  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:30.875913  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:32.876533  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:35.374093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:37.374317  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:39.873972  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:41.874678  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:44.374313  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:46.374843  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:48.375268  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:50.874442  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:52.874670  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:54.876042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:57.374242  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:59.374764  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:01.375629  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:03.874090  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:05.874933  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:07.874988  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:10.375278  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:12.875217  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:15.374125  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:17.374601  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:19.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:21.874761  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:24.373999  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:26.374333  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:28.374800  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:30.375182  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:32.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:34.875038  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:37.374178  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:39.374897  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:41.376724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:43.875074  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:45.875991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:48.374682  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:50.374756  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:52.874361  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:54.874691  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:57.375643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:59.874852  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:02.374714  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:04.874203  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:07.375099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:09.874992  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:12.375032  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:14.874592  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:17.374337  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:19.375719  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:21.874855  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:23.875005  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:26.374357  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:28.874350  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:31.374814  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:33.375229  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:35.376366  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:37.875161  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:40.374398  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:42.375093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:44.375288  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:46.874677  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:49.374853  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:51.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:53.874728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:56.374314  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:58.374922  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:00.398713  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:02.874327  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:04.875407  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:07.374991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:09.375065  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:11.874375  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:13.875021  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:15.875906  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:18.374204  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:20.375019  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:22.874356  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:24.874622  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:26.874889  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:29.374262  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:31.375054  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:33.408848  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:35.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:37.874785  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:39.875878  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:42.374064  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:44.374403  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:46.874583  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:49.375025  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:51.875263  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:54.374635  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:56.374838  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:58.874718  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:01.374046  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:03.874734  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:06.374348  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:08.874846  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:10.875133  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:13.373809  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:15.374383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:17.374643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:19.375329  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:21.874529  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:23.874845  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:26.374245  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:28.874069  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:30.874264  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:32.874477  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:35.374326  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:37.874249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:39.874482  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:41.875383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:44.374077  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:46.374372  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:48.874600  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:50.874741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:53.375464  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:55.875061  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:58.374676  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:00.377657  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:02.384684  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:04.874707  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:06.875283  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:09.374694  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:11.874370  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:14.375095  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:16.874880  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	I1016 19:07:18.870877  337340 node_ready.go:38] duration metric: took 6m0.000146858s for node "ha-556988-m03" to be "Ready" ...
	I1016 19:07:18.873970  337340 out.go:203] 
	W1016 19:07:18.876680  337340 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1016 19:07:18.876697  337340 out.go:285] * 
	W1016 19:07:18.878873  337340 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:07:18.881589  337340 out.go:203] 
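
The wait loop above is minikube polling the node's Ready condition through the Kubernetes API until the 6m0s deadline expires, at which point it exits with GUEST_START. Below is a minimal Go sketch of that kind of Ready-condition poll for readers reproducing the check by hand, assuming ordinary kubeconfig access to the cluster; the file name, kubeconfig path, and 2-second retry interval are illustrative choices and are not taken from minikube's node_ready.go.

// readycheck.go: hedged sketch of a Ready-condition poll against the API server.
// The kubeconfig path below is hypothetical; adjust for your environment.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has Ready=True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same overall budget as the failing wait above: 6 minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		if ok, err := nodeReady(ctx, cs, "ha-556988-m03"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}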
	
	
	==> CRI-O <==
	Oct 16 18:59:36 ha-556988 crio[667]: time="2025-10-16T18:59:36.033008604Z" level=info msg="Started container" PID=1192 containerID=668681e0d58e70e2edf23bedf32d99282f6a8c38b0aad26000be1021582b8b56 description=default/busybox-7b57f96db7-8m2wv/busybox id=e73f877a-ee31-407d-ac4c-a34a4abcd363 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f
	Oct 16 19:00:06 ha-556988 conmon[1141]: conmon ee0dc742d47b892b93ac <ninfo>: container 1150 exited with status 1
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.415993438Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=75767156-3fb6-42b4-95e2-d34aa2a5bea8 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.41793089Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8b5f67f6-e1d4-4af2-88c2-48fa40df96aa name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.419946292Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=58e00405-99c8-449e-a3ad-5392da1ae41a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.42034836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428022662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428394313Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3db4041b6d3bc223822867a19715c3e66ed2c364c6b3187c2a59cc7adbe12ade/merged/etc/passwd: no such file or directory"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428502664Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3db4041b6d3bc223822867a19715c3e66ed2c364c6b3187c2a59cc7adbe12ade/merged/etc/group: no such file or directory"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.431213384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.460374693Z" level=info msg="Created container e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3: kube-system/storage-provisioner/storage-provisioner" id=58e00405-99c8-449e-a3ad-5392da1ae41a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.469592921Z" level=info msg="Starting container: e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3" id=2b8bafce-4d00-4a8d-8c2a-a4b19468c0be name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.472182538Z" level=info msg="Started container" PID=1395 containerID=e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3 description=kube-system/storage-provisioner/storage-provisioner id=2b8bafce-4d00-4a8d-8c2a-a4b19468c0be name=/runtime.v1.RuntimeService/StartContainer sandboxID=3100d564efc4cf0ded67a741f8ebf6a46eeb48236dd12f0b244aa7eb0e1041e1
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.166222167Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.169795977Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.16983204Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.169854342Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173639915Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173676863Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173701159Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.176974688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.177010775Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.177034324Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.180287168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.180322968Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	e24f8a6878f29       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   3100d564efc4c       storage-provisioner                 kube-system
	668681e0d58e7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   3b5419232b288       busybox-7b57f96db7-8m2wv            default
	ee0dc742d47b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   3100d564efc4c       storage-provisioner                 kube-system
	d2ef4f1c6fd3d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   f62a65ca971ca       coredns-66bc5c9577-bg5gf            kube-system
	fa4be697bf069       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   9a193d0046bea       kindnet-c5vhh                       kube-system
	9f54a6f37bdff       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   219f0f758c58e       coredns-66bc5c9577-qnwbz            kube-system
	676cc3096c2c4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   2f36988f94206       kube-controller-manager-ha-556988   kube-system
	66e732aebd424       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   2bc6a25bda869       kube-proxy-l2lf6                    kube-system
	a6a97464c4b58       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   d09d9e3f4595d       kube-vip-ha-556988                  kube-system
	37de0677d0291       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   ff19c20039a2e       kube-apiserver-ha-556988            kube-system
	13005c03c7e83       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   2f36988f94206       kube-controller-manager-ha-556988   kube-system
	ccd1663977e23       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   16edb5468bfd8       etcd-ha-556988                      kube-system
	0947527fb7c66       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   9953eab01a12a       kube-scheduler-ha-556988            kube-system
	
	
	==> coredns [9f54a6f37bdffe68140f1859804fc0edaf64ea559a101f6caf876000479c9ee1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60434 - 54918 "HINFO IN 3143784560746213008.1236521785684304278. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01077593s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d2ef4f1c6fd3dddc27aea4bdc4cf4ce1714f112fa6b015df816ae128c747014c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37299 - 23942 "HINFO IN 3089919825197669795.1270930252494634912. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013048437s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
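
Both coredns instances report "dial tcp 10.96.0.1:443: i/o timeout", i.e. the in-cluster kubernetes Service VIP was unreachable from the pod network while the control plane was restarting. A minimal Go sketch of that reachability probe follows, assuming it is run from a pod or host on the cluster network; the 5-second timeout is an illustrative choice.

// vipprobe.go: hedged sketch of the connectivity check implied by the coredns
// errors above: can a TCP connection be opened to the kubernetes Service VIP?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		// Matches the coredns "i/o timeout" symptom when the VIP is unreachable.
		fmt.Println("VIP unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable from this network namespace")
}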
	
	
	==> describe nodes <==
	Name:               ha-556988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_53_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:53:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:07:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:59:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-556988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                b59e7c71-f015-4beb-a0b1-1db2d92a9291
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8m2wv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-bg5gf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-qnwbz             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-556988                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-c5vhh                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-556988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-556988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-l2lf6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-556988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-556988                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m51s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x9 over 13m)      kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           13m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-556988 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   NodeHasSufficientMemory  8m30s (x8 over 8m30s)  kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m30s (x8 over 8m30s)  kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m30s (x8 over 8m30s)  kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m50s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	
	
	Name:               ha-556988-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_16T18_54_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:07:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-556988-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                7a9bc276-8208-4c5e-a8a7-151b962ba6f2
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-g6s82                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-556988-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9mrmf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-556988-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-556988-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mx9hc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-556988-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-556988-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 7m25s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Warning  CgroupV1                 9m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m35s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     9m34s (x8 over 9m35s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m34s (x8 over 9m35s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m34s (x8 over 9m35s)  kubelet          Node ha-556988-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             9m8s                   node-controller  Node ha-556988-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   Starting                 8m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node ha-556988-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m26s (x8 over 8m26s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m50s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	
	
	Name:               ha-556988-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_16T18_56_35_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:56:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:58:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-556988-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                3974a7c6-147c-48e8-b522-87d967a9ed5f
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-flq9x       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2j2kg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-556988-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m57s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           7m50s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           7m46s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   NodeNotReady             7m                 node-controller  Node ha-556988-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.510048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035217] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.777829] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.353148] kauditd_printk_skb: 36 callbacks suppressed
	[Oct16 17:39] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=00000000a1708097{9P.session} n=00000000c48db394
	[  +0.001150] FS-Cache: O-key=[10] '34323935323233313231'
	[  +0.000794] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a1708097{9P.session} n=0000000008f2874d
	[  +0.001104] FS-Cache: N-key=[10] '34323935323233313231'
	[Oct16 17:40] hrtimer: interrupt took 46683506 ns
	[Oct16 18:30] kauditd_printk_skb: 8 callbacks suppressed
	[Oct16 18:32] overlayfs: idmapped layers are currently not supported
	[  +0.067059] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct16 18:38] overlayfs: idmapped layers are currently not supported
	[Oct16 18:39] overlayfs: idmapped layers are currently not supported
	[Oct16 18:53] overlayfs: idmapped layers are currently not supported
	[Oct16 18:54] overlayfs: idmapped layers are currently not supported
	[Oct16 18:55] overlayfs: idmapped layers are currently not supported
	[Oct16 18:56] overlayfs: idmapped layers are currently not supported
	[Oct16 18:57] overlayfs: idmapped layers are currently not supported
	[Oct16 18:58] overlayfs: idmapped layers are currently not supported
	[Oct16 18:59] overlayfs: idmapped layers are currently not supported
	[ +38.025144] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ccd1663977e230bbda3cae69e035a19bb725c3f88efd4340e2acdb82e35b17b4] <==
	{"level":"info","ts":"2025-10-16T19:01:17.904633Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.939403Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.945273Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:22.986850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:33184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:07:23.040670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:33206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:07:23.066338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:33214","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T19:07:23.096019Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6591995946286876817 12593026477526642892)"}
	{"level":"info","ts":"2025-10-16T19:07:23.098104Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"dd9f3debc3328b7e","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-16T19:07:23.098168Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.098451Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.098481Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.098727Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.098964Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.098803Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099083Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2025-10-16T19:07:23.099062Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099359Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e","error":"context canceled"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099448Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"dd9f3debc3328b7e","error":"failed to read dd9f3debc3328b7e on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-10-16T19:07:23.099492Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099641Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e","error":"context canceled"}
	{"level":"info","ts":"2025-10-16T19:07:23.099700Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.099747Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.099811Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.147994Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.148642Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"dd9f3debc3328b7e"}
	
	
	==> kernel <==
	 19:07:29 up  1:49,  0 user,  load average: 0.50, 1.01, 1.51
	Linux ha-556988 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fa4be697bf0693026672a5f6c9fe73e79415080f58163a0e09e3473403170716] <==
	I1016 19:06:56.160406       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:06:56.160461       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:06:56.160473       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:06.166319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:06.166355       1 main.go:301] handling current node
	I1016 19:07:06.166371       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:06.166377       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:06.166532       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:07:06.166546       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:06.166618       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:06.166629       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:07:16.159868       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:16.159905       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:07:16.160092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:16.160107       1 main.go:301] handling current node
	I1016 19:07:16.160120       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:16.160126       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:16.160187       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:07:16.160200       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:26.160029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:26.160063       1 main.go:301] handling current node
	I1016 19:07:26.160079       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:26.160084       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:26.160318       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:26.160339       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7] <==
	I1016 18:59:34.894194       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 18:59:34.896075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1016 18:59:34.896820       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I1016 18:59:34.911822       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 18:59:34.911849       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 18:59:34.920290       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:59:34.943461       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:59:34.950382       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 18:59:34.950416       1 policy_source.go:240] refreshing policies
	I1016 18:59:34.957319       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:59:34.959365       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:59:34.965217       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:59:34.965371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:59:34.971502       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:59:35.000033       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:59:35.031357       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1016 18:59:35.038221       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1016 18:59:35.053357       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:59:37.014352       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 18:59:37.014434       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1016 18:59:38.259757       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1016 18:59:40.018709       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:59:40.263262       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1016 18:59:58.250950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1016 19:00:04.488288       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf] <==
	I1016 18:59:02.476495       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:59:04.091611       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1016 18:59:04.091720       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:59:04.093637       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1016 18:59:04.094321       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1016 18:59:04.094476       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:59:04.094572       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1016 18:59:20.022685       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [676cc3096c2c428c05ab34bcbe56aece39203ffe11f9216bd113fe47eebe8d46] <==
	I1016 18:59:39.953791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-556988-m03"
	I1016 18:59:39.955638       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:59:39.954135       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-556988-m04"
	I1016 18:59:39.955915       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 18:59:39.956300       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 18:59:39.956794       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:59:39.958437       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:59:39.958540       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:59:39.958616       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:59:39.958670       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:59:39.958704       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:59:39.958656       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:59:39.964429       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:59:39.964622       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:59:39.970819       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:59:39.972126       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:59:39.973735       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:59:39.980202       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:59:39.983741       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:59:39.983826       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:59:39.983857       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:59:39.984304       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:59:39.988621       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:05:33.126031       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-zdc2h"
	E1016 19:05:33.383262       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [66e732aebd424e1c2b5fe5fa62678b4f60db51b175af2e4bdf9c05d13a3604b1] <==
	I1016 18:59:36.431382       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:59:37.074112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:59:37.404317       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:59:37.420237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 18:59:37.440936       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:59:37.547567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:59:37.547677       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:59:37.566424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:59:37.566839       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:59:37.567055       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:59:37.568313       1 config.go:200] "Starting service config controller"
	I1016 18:59:37.569180       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:59:37.569272       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:59:37.569301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:59:37.569349       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:59:37.569432       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:59:37.570116       1 config.go:309] "Starting node config controller"
	I1016 18:59:37.593325       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:59:37.593349       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:59:37.670251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:59:37.670355       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:59:37.670385       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0947527fb7c6600575f80d864636e177c1330efa7ab3caff116116cd0d07fe91] <==
	E1016 18:59:19.210127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:59:20.223711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:59:20.272552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:59:20.286900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:59:21.024708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 18:59:23.850262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:59:25.366156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:59:25.440106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:59:25.455207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:59:25.526976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:59:25.693902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:59:25.715863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:59:26.150506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:59:26.525981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:59:27.199538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:59:27.780409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:59:28.329859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:59:28.766926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:59:29.490851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:59:29.827336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:59:30.023162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:59:30.629590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:59:31.265247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:59:33.627332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1016 18:59:46.572262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941000     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b32400f6-5ec6-4a22-87fc-4b9fb8b25976-lib-modules\") pod \"kube-proxy-l2lf6\" (UID: \"b32400f6-5ec6-4a22-87fc-4b9fb8b25976\") " pod="kube-system/kube-proxy-l2lf6"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941076     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b32400f6-5ec6-4a22-87fc-4b9fb8b25976-xtables-lock\") pod \"kube-proxy-l2lf6\" (UID: \"b32400f6-5ec6-4a22-87fc-4b9fb8b25976\") " pod="kube-system/kube-proxy-l2lf6"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941166     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-xtables-lock\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941256     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-lib-modules\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941277     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/916b69a5-8ee0-43ee-87fd-9a88caebbec8-tmp\") pod \"storage-provisioner\" (UID: \"916b69a5-8ee0-43ee-87fd-9a88caebbec8\") " pod="kube-system/storage-provisioner"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941319     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-cni-cfg\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.964270     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-556988\" already exists" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.964316     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.976099     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-556988\" already exists" pod="kube-system/etcd-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.976140     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.987350     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-556988\" already exists" pod="kube-system/kube-apiserver-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.987392     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.999523     803 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:59:35 ha-556988 kubelet[803]: E1016 18:59:35.015087     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-556988\" already exists" pod="kube-system/kube-controller-manager-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.039384     803 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.039591     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.064156     803 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.176886     803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-556988" podStartSLOduration=0.17686523 podStartE2EDuration="176.86523ms" podCreationTimestamp="2025-10-16 18:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:59:35.14812186 +0000 UTC m=+36.282512675" watchObservedRunningTime="2025-10-16 18:59:35.17686523 +0000 UTC m=+36.311256037"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.286741     803 scope.go:117] "RemoveContainer" containerID="13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf"
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.357678     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b WatchSource:0}: Error finding container 9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b: Status 404 returned error can't find the container with id 9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.401613     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a WatchSource:0}: Error finding container 219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a: Status 404 returned error can't find the container with id 219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.717419     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f WatchSource:0}: Error finding container 3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f: Status 404 returned error can't find the container with id 3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f
	Oct 16 18:59:59 ha-556988 kubelet[803]: E1016 18:59:59.007146     803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9\": container with ID starting with df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9 not found: ID does not exist" containerID="df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9"
	Oct 16 18:59:59 ha-556988 kubelet[803]: I1016 18:59:59.007669     803 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9" err="rpc error: code = NotFound desc = could not find container \"df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9\": container with ID starting with df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9 not found: ID does not exist"
	Oct 16 19:00:06 ha-556988 kubelet[803]: I1016 19:00:06.414711     803 scope.go:117] "RemoveContainer" containerID="ee0dc742d47b892b93aca268c637f4c52645442b0c386d0be82fcedaaa23bc41"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-556988 -n ha-556988
helpers_test.go:269: (dbg) Run:  kubectl --context ha-556988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-d75ps
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-556988 describe pod busybox-7b57f96db7-d75ps
helpers_test.go:290: (dbg) kubectl --context ha-556988 describe pod busybox-7b57f96db7-d75ps:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-d75ps
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzmh8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jzmh8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  117s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  117s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-556988" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-556988\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-556988\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-556988\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-556988
helpers_test.go:243: (dbg) docker inspect ha-556988:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000",
	        "Created": "2025-10-16T18:53:20.826320924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337466,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:58:51.979830748Z",
	            "FinishedAt": "2025-10-16T18:58:51.377562063Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/hosts",
	        "LogPath": "/var/lib/docker/containers/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000-json.log",
	        "Name": "/ha-556988",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-556988:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-556988",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000",
	                "LowerDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b9e7c420d869ffe9f26b11e5160a4483ad085f1084b3df4806e005b1dcac6796/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-556988",
	                "Source": "/var/lib/docker/volumes/ha-556988/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-556988",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-556988",
	                "name.minikube.sigs.k8s.io": "ha-556988",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "065c5d4e8a096d5f9ffdf9b63e7c2cb496f2eb5bb12369ce1f2bda60d9a79e64",
	            "SandboxKey": "/var/run/docker/netns/065c5d4e8a09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-556988": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:e9:5a:29:59:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7adcf17f22baf4ae9b9dbf2b45e75904ea1540233e225aef4731989fd57a7fcc",
	                    "EndpointID": "6a0543cc77855a1155f456a458b934e2cd29f8314af96acb35727ae6ed5a96c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-556988",
	                        "ee539784e727"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
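(Reference note, not part of the captured output: the published ports in the inspect output above are what the following harness commands rely on to reach the node; the "Last Start" log below shows minikube resolving the SSH endpoint by reading the 22/tcp mapping with a Go template. A minimal sketch of that lookup, assuming the profile name ha-556988 as in this run:

    # Query the host port mapped to the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-556988
    # -> 33178 here, matching the "Ports" section above; the harness then dials ssh on 127.0.0.1:33178

The same template appears verbatim in the provisioning log further down.)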
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-556988 -n ha-556988
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 logs -n 25: (1.361076554s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m02 sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m02.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt ha-556988-m04:/home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp testdata/cp-test.txt ha-556988-m04:/home/docker/cp-test.txt                                                             │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002313520/001/cp-test_ha-556988-m04.txt │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988:/home/docker/cp-test_ha-556988-m04_ha-556988.txt                       │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988.txt                                                 │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m02:/home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m02 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ cp      │ ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m03:/home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt               │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ ssh     │ ha-556988 ssh -n ha-556988-m03 sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt                                         │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ node    │ ha-556988 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:57 UTC │
	│ node    │ ha-556988 node start m02 --alsologtostderr -v 5                                                                                      │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:57 UTC │ 16 Oct 25 18:58 UTC │
	│ node    │ ha-556988 node list --alsologtostderr -v 5                                                                                           │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │                     │
	│ stop    │ ha-556988 stop --alsologtostderr -v 5                                                                                                │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │ 16 Oct 25 18:58 UTC │
	│ start   │ ha-556988 start --wait true --alsologtostderr -v 5                                                                                   │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 18:58 UTC │                     │
	│ node    │ ha-556988 node list --alsologtostderr -v 5                                                                                           │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 19:07 UTC │                     │
	│ node    │ ha-556988 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-556988 │ jenkins │ v1.37.0 │ 16 Oct 25 19:07 UTC │ 16 Oct 25 19:07 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:58:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:58:51.718625  337340 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:58:51.718820  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.718832  337340 out.go:374] Setting ErrFile to fd 2...
	I1016 18:58:51.718837  337340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:58:51.719085  337340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:58:51.719452  337340 out.go:368] Setting JSON to false
	I1016 18:58:51.720287  337340 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6061,"bootTime":1760635071,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:58:51.720360  337340 start.go:141] virtualization:  
	I1016 18:58:51.723622  337340 out.go:179] * [ha-556988] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:58:51.727453  337340 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:58:51.727561  337340 notify.go:220] Checking for updates...
	I1016 18:58:51.733207  337340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:58:51.736137  337340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:58:51.738974  337340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:58:51.741951  337340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:58:51.744907  337340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:58:51.748268  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:51.748399  337340 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:58:51.772958  337340 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:58:51.773087  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.833709  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.824777239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.833825  337340 docker.go:318] overlay module found
	I1016 18:58:51.836939  337340 out.go:179] * Using the docker driver based on existing profile
	I1016 18:58:51.839798  337340 start.go:305] selected driver: docker
	I1016 18:58:51.839818  337340 start.go:925] validating driver "docker" against &{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.839961  337340 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:58:51.840070  337340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:58:51.894329  337340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-16 18:58:51.884487993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:58:51.894716  337340 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:58:51.894754  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:51.894821  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:51.894871  337340 start.go:349] cluster config:
	{Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:51.898184  337340 out.go:179] * Starting "ha-556988" primary control-plane node in "ha-556988" cluster
	I1016 18:58:51.901075  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:58:51.904106  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:58:51.906904  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:51.906960  337340 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:58:51.906971  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:58:51.906995  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:58:51.907065  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:58:51.907074  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:58:51.907213  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:51.927032  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:58:51.927054  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:58:51.927071  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:58:51.927094  337340 start.go:360] acquireMachinesLock for ha-556988: {Name:mk71c3a6201989099f6bf114603feb8455c41f5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:58:51.927153  337340 start.go:364] duration metric: took 41.945µs to acquireMachinesLock for "ha-556988"
	I1016 18:58:51.927187  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:58:51.927198  337340 fix.go:54] fixHost starting: 
	I1016 18:58:51.927452  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:51.944496  337340 fix.go:112] recreateIfNeeded on ha-556988: state=Stopped err=<nil>
	W1016 18:58:51.944531  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:58:51.947809  337340 out.go:252] * Restarting existing docker container for "ha-556988" ...
	I1016 18:58:51.947886  337340 cli_runner.go:164] Run: docker start ha-556988
	I1016 18:58:52.211064  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:58:52.238130  337340 kic.go:430] container "ha-556988" state is running.
	I1016 18:58:52.238496  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:52.265254  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:58:52.265525  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:58:52.265595  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:52.289105  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:52.289561  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:52.289576  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:58:52.290191  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:58:55.440597  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
	I1016 18:58:55.440631  337340 ubuntu.go:182] provisioning hostname "ha-556988"
	I1016 18:58:55.440701  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.458200  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.458510  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.458528  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988 && echo "ha-556988" | sudo tee /etc/hostname
	I1016 18:58:55.615084  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988
	
	I1016 18:58:55.615165  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:55.633608  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:55.633925  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:55.633950  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:58:55.781429  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:58:55.781454  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:58:55.781481  337340 ubuntu.go:190] setting up certificates
	I1016 18:58:55.781490  337340 provision.go:84] configureAuth start
	I1016 18:58:55.781555  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:55.798617  337340 provision.go:143] copyHostCerts
	I1016 18:58:55.798664  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798709  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:58:55.798730  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:58:55.798812  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:58:55.798915  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798938  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:58:55.798949  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:58:55.798989  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:58:55.799046  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799068  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:58:55.799078  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:58:55.799112  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:58:55.799198  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988 san=[127.0.0.1 192.168.49.2 ha-556988 localhost minikube]
	I1016 18:58:56.377628  337340 provision.go:177] copyRemoteCerts
	I1016 18:58:56.377703  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:58:56.377743  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.397097  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:56.500593  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:58:56.500663  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:58:56.518370  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:58:56.518433  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:58:56.536547  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:58:56.536628  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1016 18:58:56.555074  337340 provision.go:87] duration metric: took 773.569729ms to configureAuth
	I1016 18:58:56.555099  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:58:56.555326  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:58:56.555445  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.572643  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:58:56.572965  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1016 18:58:56.572986  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:58:56.890339  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:58:56.890428  337340 machine.go:96] duration metric: took 4.624892872s to provisionDockerMachine
	I1016 18:58:56.890454  337340 start.go:293] postStartSetup for "ha-556988" (driver="docker")
	I1016 18:58:56.890480  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:58:56.890607  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:58:56.890683  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:56.913382  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.017075  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:58:57.021857  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:58:57.021887  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:58:57.021899  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:58:57.021965  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:58:57.022045  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:58:57.022052  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:58:57.022160  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:58:57.030852  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:57.048968  337340 start.go:296] duration metric: took 158.482858ms for postStartSetup
	I1016 18:58:57.049157  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:58:57.049222  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.066845  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.166118  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:58:57.170752  337340 fix.go:56] duration metric: took 5.243547354s for fixHost
	I1016 18:58:57.170779  337340 start.go:83] releasing machines lock for "ha-556988", held for 5.243610027s
	I1016 18:58:57.170862  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:58:57.187672  337340 ssh_runner.go:195] Run: cat /version.json
	I1016 18:58:57.187699  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:58:57.187723  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.187757  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:58:57.206208  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.213346  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:58:57.391366  337340 ssh_runner.go:195] Run: systemctl --version
	I1016 18:58:57.397910  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:58:57.434230  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:58:57.439686  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:58:57.439757  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:58:57.447828  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:58:57.447851  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:58:57.447886  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:58:57.447952  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:58:57.463944  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:58:57.477406  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:58:57.477468  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:58:57.493693  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:58:57.507255  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:58:57.614114  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:58:57.729976  337340 docker.go:234] disabling docker service ...
	I1016 18:58:57.730050  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:58:57.745940  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:58:57.758869  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:58:57.875693  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:58:57.984271  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:58:57.997324  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:58:58.012287  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:58:58.012387  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.023645  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:58:58.023740  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.036244  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.046489  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.055569  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:58:58.065264  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.075123  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.084654  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:58:58.094603  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:58:58.102554  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:58:58.110013  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.218071  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:58:58.347916  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:58:58.348026  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:58:58.351852  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:58:58.351953  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:58:58.355554  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:58:58.382893  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:58:58.383032  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.410837  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:58:58.446345  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:58:58.449238  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:58:58.465498  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:58:58.469406  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:58:58.479415  337340 kubeadm.go:883] updating cluster {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:58:58.479566  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:58:58.479620  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.516159  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.516181  337340 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:58:58.516239  337340 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:58:58.543999  337340 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:58:58.544030  337340 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:58:58.544040  337340 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 18:58:58.544140  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:58:58.544225  337340 ssh_runner.go:195] Run: crio config
	I1016 18:58:58.618937  337340 cni.go:84] Creating CNI manager for ""
	I1016 18:58:58.618957  337340 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1016 18:58:58.618981  337340 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:58:58.619008  337340 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-556988 NodeName:ha-556988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:58:58.619133  337340 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-556988"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:58:58.619160  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:58:58.619222  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:58:58.631579  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:58:58.631697  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 18:58:58.631769  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:58:58.640083  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:58:58.640188  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1016 18:58:58.648089  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1016 18:58:58.661375  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:58:58.674583  337340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1016 18:58:58.687345  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:58:58.700772  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:58:58.704503  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
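The one-liner above strips any existing control-plane.minikube.internal mapping from /etc/hosts and appends the VIP entry. A Go sketch of the same idea (illustrative only; it writes a scratch copy under /tmp rather than replacing /etc/hosts, mirroring the temp-file-then-copy pattern in the logged command):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.49.254"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any line that already ends with "<tab>control-plane.minikube.internal".
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	// The real flow then copies the scratch file back over /etc/hosts with sudo cp.
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /tmp/hosts.new")
}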
	I1016 18:58:58.714276  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:58:58.833486  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:58:58.851263  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.2
	I1016 18:58:58.851288  337340 certs.go:195] generating shared ca certs ...
	I1016 18:58:58.851306  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:58.851471  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:58:58.851524  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:58:58.851537  337340 certs.go:257] generating profile certs ...
	I1016 18:58:58.851633  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:58:58.851666  337340 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797
	I1016 18:58:58.851690  337340 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1016 18:58:59.152876  337340 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 ...
	I1016 18:58:59.152960  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797: {Name:mk3d22e55d5c37c04716dc4d1ee3cbc4538fbdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153223  337340 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 ...
	I1016 18:58:59.153265  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797: {Name:mkda3eb1676258b3c7a46448934b59023d353a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:58:59.153432  337340 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt
	I1016 18:58:59.153636  337340 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.1de6c797 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key
	I1016 18:58:59.153853  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:58:59.153891  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:58:59.153923  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:58:59.153965  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:58:59.153998  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:58:59.154028  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:58:59.154076  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:58:59.154112  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:58:59.154143  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:58:59.154239  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:58:59.154300  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:58:59.154325  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:58:59.154381  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:58:59.154435  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:58:59.154491  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:58:59.154609  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:58:59.154690  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.154737  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.154771  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.155500  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:58:59.174654  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:58:59.194053  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:58:59.220036  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:58:59.241089  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:58:59.259308  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:58:59.276555  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:58:59.293855  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:58:59.311467  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:58:59.329708  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:58:59.347304  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:58:59.364602  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:58:59.377635  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:58:59.384255  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:58:59.393733  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397737  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.397824  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:58:59.438696  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:58:59.446893  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:58:59.455572  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459600  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.459668  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:58:59.500823  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:58:59.509003  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:58:59.520724  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528394  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.528467  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:58:59.578056  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 18:58:59.586838  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:58:59.594144  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:58:59.638647  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:58:59.694080  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:58:59.765575  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:58:59.865472  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:58:59.931581  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
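Each of the openssl calls above is `x509 -checkend 86400`, i.e. "does this certificate expire within the next 24 hours?". A minimal standard-library Go sketch of the same check (editor's illustration; the path is one of the certificates listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent to openssl's -checkend 86400: fail if NotAfter falls within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}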
	I1016 18:58:59.986682  337340 kubeadm.go:400] StartCluster: {Name:ha-556988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:58:59.986889  337340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:58:59.986987  337340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:59:00.020883  337340 cri.go:89] found id: "a6a97464c4b58734820a4c747fbaa58980bfcb3cdc5b94d0a49804bd9ecaf2d2"
	I1016 18:59:00.020964  337340 cri.go:89] found id: "37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7"
	I1016 18:59:00.020984  337340 cri.go:89] found id: "13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf"
	I1016 18:59:00.021007  337340 cri.go:89] found id: "ccd1663977e230bbda3cae69e035a19bb725c3f88efd4340e2acdb82e35b17b4"
	I1016 18:59:00.021041  337340 cri.go:89] found id: "0947527fb7c6600575f80d864636e177c1330efa7ab3caff116116cd0d07fe91"
	I1016 18:59:00.021071  337340 cri.go:89] found id: ""
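The container IDs above come from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which prints one ID per line (the empty final "found id" entry likely corresponds to the blank trailing line of that output). A small Go sketch that runs the same command and collects the IDs (editor's illustration; assumes crictl is on PATH and the caller has the needed privileges):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log, minus the sudo wrapper.
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// One container ID per non-empty output line.
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}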
	I1016 18:59:00.021222  337340 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:59:00.048970  337340 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:59:00Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:59:00.049191  337340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:59:00.064913  337340 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:59:00.065020  337340 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:59:00.065128  337340 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:59:00.081513  337340 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:00.082142  337340 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-556988" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.082376  337340 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "ha-556988" cluster setting kubeconfig missing "ha-556988" context setting]
	I1016 18:59:00.082852  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.083778  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:59:00.084642  337340 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1016 18:59:00.084775  337340 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:59:00.084800  337340 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:59:00.084835  337340 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:59:00.084861  337340 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:59:00.084885  337340 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:59:00.085481  337340 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:59:00.133777  337340 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1016 18:59:00.133865  337340 kubeadm.go:601] duration metric: took 68.819342ms to restartPrimaryControlPlane
	I1016 18:59:00.133892  337340 kubeadm.go:402] duration metric: took 147.219085ms to StartCluster
	I1016 18:59:00.133962  337340 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.134087  337340 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:59:00.134991  337340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:00.135381  337340 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:00.135451  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:00.135503  337340 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:59:00.136478  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.165207  337340 out.go:179] * Enabled addons: 
	I1016 18:59:00.168421  337340 addons.go:514] duration metric: took 32.907014ms for enable addons: enabled=[]
	I1016 18:59:00.168517  337340 start.go:246] waiting for cluster config update ...
	I1016 18:59:00.168542  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:00.191362  337340 out.go:203] 
	I1016 18:59:00.209821  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:00.209961  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.213495  337340 out.go:179] * Starting "ha-556988-m02" control-plane node in "ha-556988" cluster
	I1016 18:59:00.216452  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:00.223747  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:00.226672  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:00.226714  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:00.226842  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:00.226852  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:00.227106  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.227394  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:00.266622  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:00.266645  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:00.266659  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:00.266685  337340 start.go:360] acquireMachinesLock for ha-556988-m02: {Name:mkb742ea24d411e97f6bd75961598d91ba358bd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:00.266743  337340 start.go:364] duration metric: took 41.445µs to acquireMachinesLock for "ha-556988-m02"
	I1016 18:59:00.266766  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:00.266772  337340 fix.go:54] fixHost starting: m02
	I1016 18:59:00.267061  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.297319  337340 fix.go:112] recreateIfNeeded on ha-556988-m02: state=Stopped err=<nil>
	W1016 18:59:00.297360  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:00.300819  337340 out.go:252] * Restarting existing docker container for "ha-556988-m02" ...
	I1016 18:59:00.300940  337340 cli_runner.go:164] Run: docker start ha-556988-m02
	I1016 18:59:00.708144  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:59:00.733543  337340 kic.go:430] container "ha-556988-m02" state is running.
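After `docker start`, the container's state is read back with `docker container inspect --format={{.State.Status}}`. A compact Go sketch of that probe via os/exec (editor's illustration; the container name is the one from this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same query the log issues through cli_runner.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Status}}", "ha-556988-m02").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	status := strings.TrimSpace(string(out))
	fmt.Println("container status:", status) // e.g. "running" or "exited"
}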
	I1016 18:59:00.733902  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:00.760804  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:00.761309  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:00.761403  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:00.808146  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:00.808685  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:00.808701  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:00.809303  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40522->127.0.0.1:33183: read: connection reset by peer
	I1016 18:59:04.034070  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.034139  337340 ubuntu.go:182] provisioning hostname "ha-556988-m02"
	I1016 18:59:04.034243  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.063655  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.063975  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.063993  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m02 && echo "ha-556988-m02" | sudo tee /etc/hostname
	I1016 18:59:04.267030  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m02
	
	I1016 18:59:04.267113  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:04.300780  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:04.301103  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:04.301127  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:04.469711  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:04.469796  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:04.469828  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:04.469864  337340 provision.go:84] configureAuth start
	I1016 18:59:04.469974  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:04.508993  337340 provision.go:143] copyHostCerts
	I1016 18:59:04.509035  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509067  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:04.509074  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:04.509305  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:04.509422  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509441  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:04.509446  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:04.509496  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:04.509545  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509562  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:04.509566  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:04.509591  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:04.509649  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m02 san=[127.0.0.1 192.168.49.3 ha-556988-m02 localhost minikube]
	I1016 18:59:05.303068  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:05.303142  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:05.303195  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.322174  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:05.428054  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:05.428132  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:59:05.461825  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:05.461888  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:05.487317  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:05.487378  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:05.516798  337340 provision.go:87] duration metric: took 1.046901762s to configureAuth
	I1016 18:59:05.516822  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:05.517061  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:05.517252  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.546833  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:05.547150  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1016 18:59:05.547168  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:05.937754  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:05.937782  337340 machine.go:96] duration metric: took 5.176458229s to provisionDockerMachine
	I1016 18:59:05.937802  337340 start.go:293] postStartSetup for "ha-556988-m02" (driver="docker")
	I1016 18:59:05.937814  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:05.937890  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:05.937937  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:05.955324  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.057291  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:06.060623  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:06.060656  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:06.060668  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:06.060728  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:06.060812  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:06.060824  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:06.060930  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:06.068899  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:06.087392  337340 start.go:296] duration metric: took 149.572621ms for postStartSetup
	I1016 18:59:06.087476  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:06.087533  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.109477  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.222886  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:06.229852  337340 fix.go:56] duration metric: took 5.963072953s for fixHost
	I1016 18:59:06.229883  337340 start.go:83] releasing machines lock for "ha-556988-m02", held for 5.963130679s
	I1016 18:59:06.229963  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m02
	I1016 18:59:06.266689  337340 out.go:179] * Found network options:
	I1016 18:59:06.273332  337340 out.go:179]   - NO_PROXY=192.168.49.2
	W1016 18:59:06.276561  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:06.276606  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:06.276683  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:06.276749  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.276754  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:06.276816  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m02
	I1016 18:59:06.317825  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.323025  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m02/id_rsa Username:docker}
	I1016 18:59:06.671873  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:06.677594  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:06.677732  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:06.690261  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:06.690335  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:06.690384  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:06.690471  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:06.714650  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:06.733867  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:06.733929  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:06.752522  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:06.775910  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:06.992043  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:07.227541  337340 docker.go:234] disabling docker service ...
	I1016 18:59:07.227607  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:07.250512  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:07.276078  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:07.484122  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:07.729089  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:07.767438  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:07.809637  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:07.809753  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.832720  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:07.832842  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.859881  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.889284  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.901694  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:07.922354  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.941649  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.951572  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:07.961513  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:07.970666  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
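The sed/sysctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and enables IP forwarding before crio is reloaded. A hedged Go sketch of just the pause-image rewrite, equivalent to the first sed in this block (illustration only; run as root and restart crio afterwards):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Matches any existing pause_image assignment, line by line, like the sed above.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, updated, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("pause_image set; restart crio to apply")
}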
	I1016 18:59:07.978742  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:08.323908  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:59:09.667321  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.343330778s)
	I1016 18:59:09.667346  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:59:09.667400  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:59:09.677469  337340 start.go:563] Will wait 60s for crictl version
	I1016 18:59:09.677549  337340 ssh_runner.go:195] Run: which crictl
	I1016 18:59:09.683697  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:59:09.731470  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:59:09.731621  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.782976  337340 ssh_runner.go:195] Run: crio --version
	I1016 18:59:09.844144  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:59:09.847254  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 18:59:09.850158  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:59:09.881787  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 18:59:09.886123  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:09.903709  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 18:59:09.903953  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:09.904211  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:59:09.944289  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 18:59:09.944603  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.3
	I1016 18:59:09.944620  337340 certs.go:195] generating shared ca certs ...
	I1016 18:59:09.944638  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:59:09.944779  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 18:59:09.944832  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 18:59:09.944844  337340 certs.go:257] generating profile certs ...
	I1016 18:59:09.944939  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 18:59:09.945027  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.2ae973c7
	I1016 18:59:09.945079  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 18:59:09.945092  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 18:59:09.945106  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 18:59:09.945127  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 18:59:09.945166  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 18:59:09.945182  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 18:59:09.945202  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 18:59:09.945213  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 18:59:09.945233  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 18:59:09.945291  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 18:59:09.945327  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 18:59:09.945341  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 18:59:09.945370  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 18:59:09.945403  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:59:09.945429  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 18:59:09.945482  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:09.945516  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:09.945534  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 18:59:09.945549  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 18:59:09.945612  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:59:09.972941  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:59:10.097521  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 18:59:10.102513  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 18:59:10.114147  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 18:59:10.119117  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 18:59:10.130126  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 18:59:10.134419  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 18:59:10.144627  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 18:59:10.148520  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 18:59:10.157921  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 18:59:10.161674  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 18:59:10.171535  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 18:59:10.175229  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 18:59:10.184604  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:59:10.206415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 18:59:10.228102  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:59:10.258566  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 18:59:10.283952  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:59:10.306580  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:59:10.329415  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:59:10.348969  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:59:10.368321  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:59:10.387180  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 18:59:10.408929  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 18:59:10.429114  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 18:59:10.444245  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 18:59:10.458197  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 18:59:10.472176  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 18:59:10.485882  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 18:59:10.499848  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 18:59:10.515126  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 18:59:10.528667  337340 ssh_runner.go:195] Run: openssl version
	I1016 18:59:10.535446  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:59:10.544186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548237  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.548342  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:59:10.591605  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:59:10.600300  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 18:59:10.608985  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612817  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.612923  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 18:59:10.655658  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 18:59:10.664193  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 18:59:10.673263  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677209  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.677288  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 18:59:10.718855  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:59:10.726829  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:59:10.730876  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:59:10.773328  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:59:10.815232  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:59:10.858016  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:59:10.899603  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:59:10.942507  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
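
For context, the block above shows two routine certificate steps: each CA is hashed with `openssl x509 -hash` and symlinked into /etc/ssl/certs as `<subject-hash>.0` so OpenSSL-based clients can find it, and each leaf certificate is checked with `-checkend 86400` to confirm it stays valid for at least the next 24 hours. The following is only an illustrative Go sketch of those two openssl invocations (it is not minikube's ssh_runner code; paths are the ones from the log):

// Sketch: install a CA hash symlink and check a leaf cert's remaining validity.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(caPath string) error {
	// `openssl x509 -hash -noout -in <ca>` prints the subject hash (e.g. "b5213941")
	// that OpenSSL uses to look up CAs under /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace a stale link if one exists
	return os.Symlink(caPath, link)
}

func validFor24h(certPath string) bool {
	// -checkend 86400 exits non-zero if the certificate expires within 86400 seconds.
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install CA:", err)
	}
	fmt.Println("apiserver cert valid >24h:", validFor24h("/var/lib/minikube/certs/apiserver.crt"))
}
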
	I1016 18:59:10.988343  337340 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1016 18:59:10.988480  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:59:10.988535  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 18:59:10.988601  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 18:59:11.002298  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:59:11.002415  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1016 18:59:11.002494  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:59:11.011536  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:59:11.011651  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 18:59:11.021905  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 18:59:11.037889  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:59:11.051536  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 18:59:11.069953  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 18:59:11.074152  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:59:11.086164  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.252847  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.266706  337340 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:59:11.267048  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:11.273634  337340 out.go:179] * Verifying Kubernetes components...
	I1016 18:59:11.276480  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:11.421023  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:59:11.436654  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 18:59:11.436746  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 18:59:11.437099  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m02" to be "Ready" ...
	I1016 18:59:34.862749  337340 node_ready.go:49] node "ha-556988-m02" is "Ready"
	I1016 18:59:34.862783  337340 node_ready.go:38] duration metric: took 23.425601966s for node "ha-556988-m02" to be "Ready" ...
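
The 23s wait above is the standard pattern of polling the node object until its Ready condition turns True. As a rough sketch only (not minikube's node_ready.go), the same check can be expressed with client-go, assuming a kubeconfig at the default location and the node name from this log:

// Sketch: poll a node's Ready condition via client-go with a 6-minute budget.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log above
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(cs, "ha-556988-m02"); ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node")
}
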
	I1016 18:59:34.862797  337340 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:59:34.862859  337340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:59:34.885329  337340 api_server.go:72] duration metric: took 23.618240686s to wait for apiserver process to appear ...
	I1016 18:59:34.885358  337340 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:59:34.885377  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:34.897604  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:34.897640  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.386323  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.400088  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.400123  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:35.885493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:35.987319  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:35.987359  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.385456  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.412352  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.412390  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:36.885906  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:36.906763  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:36.906805  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.386256  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.404132  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.404163  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:37.885488  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:37.894320  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:59:37.894358  337340 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:59:38.385493  337340 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 18:59:38.394925  337340 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 18:59:38.395973  337340 api_server.go:141] control plane version: v1.34.1
	I1016 18:59:38.396011  337340 api_server.go:131] duration metric: took 3.51063495s to wait for apiserver health ...
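
The repeated 500 responses above are expected while the restarted apiserver finishes its post-start hooks (rbac/bootstrap-roles and friends); minikube simply re-polls /healthz about every 500ms until it returns 200. A minimal sketch of such a poll loop follows; it skips TLS verification for brevity, whereas the real client is built from the cluster CA and client certificates shown earlier in this log:

// Sketch: poll an apiserver /healthz endpoint until it reports 200 or a timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reports "ok"
			}
			// A 500 response lists the post-start hooks that have not completed yet.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
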
	I1016 18:59:38.396021  337340 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:59:38.401864  337340 system_pods.go:59] 26 kube-system pods found
	I1016 18:59:38.401911  337340 system_pods.go:61] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401923  337340 system_pods.go:61] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.401929  337340 system_pods.go:61] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.401935  337340 system_pods.go:61] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.401940  337340 system_pods.go:61] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.401952  337340 system_pods.go:61] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.401957  337340 system_pods.go:61] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.401968  337340 system_pods.go:61] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.401972  337340 system_pods.go:61] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.401979  337340 system_pods.go:61] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.401988  337340 system_pods.go:61] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.401994  337340 system_pods.go:61] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.402001  337340 system_pods.go:61] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402018  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.402024  337340 system_pods.go:61] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.402030  337340 system_pods.go:61] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.402037  337340 system_pods.go:61] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.402041  337340 system_pods.go:61] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.402049  337340 system_pods.go:61] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.402060  337340 system_pods.go:61] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402068  337340 system_pods.go:61] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.402073  337340 system_pods.go:61] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.402077  337340 system_pods.go:61] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.402081  337340 system_pods.go:61] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.402085  337340 system_pods.go:61] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.402089  337340 system_pods.go:61] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.402095  337340 system_pods.go:74] duration metric: took 6.067311ms to wait for pod list to return data ...
	I1016 18:59:38.402109  337340 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:59:38.406892  337340 default_sa.go:45] found service account: "default"
	I1016 18:59:38.406919  337340 default_sa.go:55] duration metric: took 4.803341ms for default service account to be created ...
	I1016 18:59:38.406930  337340 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:59:38.413271  337340 system_pods.go:86] 26 kube-system pods found
	I1016 18:59:38.413316  337340 system_pods.go:89] "coredns-66bc5c9577-bg5gf" [e74de9d2-b737-42ff-8b64-feac035b2a70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413326  337340 system_pods.go:89] "coredns-66bc5c9577-qnwbz" [774c649b-c0e4-4cdb-b2e8-cf72f5904899] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:59:38.413332  337340 system_pods.go:89] "etcd-ha-556988" [3e9c14ad-eae5-477f-b7c0-9dcdaf895b65] Running
	I1016 18:59:38.413337  337340 system_pods.go:89] "etcd-ha-556988-m02" [3f391bcc-813d-4db1-9aaa-258f230517fc] Running
	I1016 18:59:38.413343  337340 system_pods.go:89] "etcd-ha-556988-m03" [ea908ff8-f137-460f-9bf4-17345b1c9a66] Running
	I1016 18:59:38.413350  337340 system_pods.go:89] "kindnet-9mrmf" [45836450-4eac-49b9-a0cf-8d5a07061558] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:59:38.413355  337340 system_pods.go:89] "kindnet-c5vhh" [aadf11dc-a51d-4828-9ae1-0295e92d1c95] Running
	I1016 18:59:38.413367  337340 system_pods.go:89] "kindnet-flq9x" [aea5627f-11fc-4f3a-a968-1ca5c98d36b5] Running
	I1016 18:59:38.413379  337340 system_pods.go:89] "kindnet-qj4cl" [ef19450a-7ec3-4ccf-a5e9-c7937fd3339d] Running
	I1016 18:59:38.413390  337340 system_pods.go:89] "kube-apiserver-ha-556988" [24a555d8-f3f0-4b1c-b576-6ca1aff25a54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:59:38.413396  337340 system_pods.go:89] "kube-apiserver-ha-556988-m02" [1fc44835-ea0a-40c3-8042-f1b7e4c5c317] Running
	I1016 18:59:38.413406  337340 system_pods.go:89] "kube-apiserver-ha-556988-m03" [4c29b8ab-29b7-4dbb-8c29-18837ac4113e] Running
	I1016 18:59:38.413413  337340 system_pods.go:89] "kube-controller-manager-ha-556988" [cc4765f2-5a4b-44ce-b5da-77313d0027c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413425  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m02" [5a169a8b-1028-4629-a4b9-9cad3c765757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:59:38.413430  337340 system_pods.go:89] "kube-controller-manager-ha-556988-m03" [ec16f7f4-acee-4d97-8cf3-20c0f326b08b] Running
	I1016 18:59:38.413435  337340 system_pods.go:89] "kube-proxy-2j2kg" [26525910-8639-4ca0-a113-d428683bd112] Running
	I1016 18:59:38.413440  337340 system_pods.go:89] "kube-proxy-dqhtm" [eee1ee0e-f145-4298-afe6-1ca41a084680] Running
	I1016 18:59:38.413444  337340 system_pods.go:89] "kube-proxy-l2lf6" [b32400f6-5ec6-4a22-87fc-4b9fb8b25976] Running
	I1016 18:59:38.413456  337340 system_pods.go:89] "kube-proxy-mx9hc" [64ee00b3-06f0-4db8-91a2-cb2bb4b25b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:59:38.413467  337340 system_pods.go:89] "kube-scheduler-ha-556988" [37cb1ddb-9782-4e54-9793-8f2a07fe78e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413474  337340 system_pods.go:89] "kube-scheduler-ha-556988-m02" [d819d0c4-766f-44c5-8bb9-b8f35e3d8d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:59:38.413486  337340 system_pods.go:89] "kube-scheduler-ha-556988-m03" [33286dd3-5abd-484d-abbb-8cb29c08d3ee] Running
	I1016 18:59:38.413491  337340 system_pods.go:89] "kube-vip-ha-556988" [0c7ea0da-ea3e-4fff-a76c-98b473255af9] Running
	I1016 18:59:38.413495  337340 system_pods.go:89] "kube-vip-ha-556988-m02" [850d312a-8987-4b0f-bb9e-a393a24d9b49] Running
	I1016 18:59:38.413498  337340 system_pods.go:89] "kube-vip-ha-556988-m03" [85c7549d-c836-473b-916a-e4091d8daaa4] Running
	I1016 18:59:38.413502  337340 system_pods.go:89] "storage-provisioner" [916b69a5-8ee0-43ee-87fd-9a88caebbec8] Running
	I1016 18:59:38.413515  337340 system_pods.go:126] duration metric: took 6.570484ms to wait for k8s-apps to be running ...
	I1016 18:59:38.413533  337340 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:59:38.413612  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:59:38.430123  337340 system_svc.go:56] duration metric: took 16.57935ms WaitForService to wait for kubelet
	I1016 18:59:38.430164  337340 kubeadm.go:586] duration metric: took 27.163079108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:59:38.430184  337340 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:59:38.453899  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453938  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453950  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453964  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453969  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453977  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453981  337340 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 18:59:38.453986  337340 node_conditions.go:123] node cpu capacity is 2
	I1016 18:59:38.453993  337340 node_conditions.go:105] duration metric: took 23.803362ms to run NodePressure ...
	I1016 18:59:38.454005  337340 start.go:241] waiting for startup goroutines ...
	I1016 18:59:38.454041  337340 start.go:255] writing updated cluster config ...
	I1016 18:59:38.457719  337340 out.go:203] 
	I1016 18:59:38.460987  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:38.461187  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.464790  337340 out.go:179] * Starting "ha-556988-m03" control-plane node in "ha-556988" cluster
	I1016 18:59:38.468557  337340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:59:38.471645  337340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:59:38.474579  337340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:59:38.474688  337340 cache.go:58] Caching tarball of preloaded images
	I1016 18:59:38.474647  337340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:59:38.475030  337340 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 18:59:38.475073  337340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:59:38.475235  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.500130  337340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:59:38.500149  337340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:59:38.500163  337340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:59:38.500186  337340 start.go:360] acquireMachinesLock for ha-556988-m03: {Name:mk34d9a60e195460efb0e14fede3a8b24d8e28a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:59:38.500240  337340 start.go:364] duration metric: took 38.999µs to acquireMachinesLock for "ha-556988-m03"
	I1016 18:59:38.500259  337340 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:59:38.500264  337340 fix.go:54] fixHost starting: m03
	I1016 18:59:38.500516  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.520771  337340 fix.go:112] recreateIfNeeded on ha-556988-m03: state=Stopped err=<nil>
	W1016 18:59:38.520796  337340 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:59:38.523984  337340 out.go:252] * Restarting existing docker container for "ha-556988-m03" ...
	I1016 18:59:38.524069  337340 cli_runner.go:164] Run: docker start ha-556988-m03
	I1016 18:59:38.865706  337340 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:59:38.891919  337340 kic.go:430] container "ha-556988-m03" state is running.
	I1016 18:59:38.895965  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:38.924344  337340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/config.json ...
	I1016 18:59:38.924714  337340 machine.go:93] provisionDockerMachine start ...
	I1016 18:59:38.924805  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:38.953535  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:38.953854  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:38.954163  337340 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:59:38.955105  337340 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 18:59:42.156520  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.156559  337340 ubuntu.go:182] provisioning hostname "ha-556988-m03"
	I1016 18:59:42.156649  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.195862  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.196197  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.196217  337340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-556988-m03 && echo "ha-556988-m03" | sudo tee /etc/hostname
	I1016 18:59:42.415761  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-556988-m03
	
	I1016 18:59:42.415927  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:42.448329  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:42.448631  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:42.448648  337340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-556988-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-556988-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-556988-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:59:42.655633  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:59:42.655699  337340 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 18:59:42.655755  337340 ubuntu.go:190] setting up certificates
	I1016 18:59:42.655798  337340 provision.go:84] configureAuth start
	I1016 18:59:42.655888  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:42.682731  337340 provision.go:143] copyHostCerts
	I1016 18:59:42.682774  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682809  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 18:59:42.682816  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 18:59:42.682894  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 18:59:42.683003  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683029  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 18:59:42.683034  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 18:59:42.683063  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 18:59:42.683113  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683134  337340 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 18:59:42.683138  337340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 18:59:42.683162  337340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 18:59:42.683208  337340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.ha-556988-m03 san=[127.0.0.1 192.168.49.4 ha-556988-m03 localhost minikube]
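
The provisioner above issues a machine server certificate signed by the local CA with the SANs listed in that line (127.0.0.1, 192.168.49.4, ha-556988-m03, localhost, minikube). Purely as an illustration of that kind of issuance with crypto/x509 (not minikube's certs package; the CA here is generated in-memory instead of read from ca.pem/ca-key.pem):

// Sketch: sign a server certificate carrying the node's DNS and IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA standing in for ca.pem / ca-key.pem from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the node's SANs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-556988-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-556988-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600)
}
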
	I1016 18:59:42.986072  337340 provision.go:177] copyRemoteCerts
	I1016 18:59:42.986191  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:59:42.986266  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.009339  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.190424  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1016 18:59:43.190488  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 18:59:43.234240  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1016 18:59:43.234303  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:59:43.271524  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1016 18:59:43.271634  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:59:43.309031  337340 provision.go:87] duration metric: took 653.205044ms to configureAuth
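	A quick way to sanity-check the server certificate that configureAuth just generated (path and SAN list taken from the provision.go lines above; this is a hypothetical follow-up on the build host, not part of the captured run):

	    # verify the SANs on the freshly generated server cert
	    openssl x509 -in /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem \
	      -noout -text | grep -A1 'Subject Alternative Name'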
	I1016 18:59:43.309101  337340 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:59:43.309396  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:59:43.309551  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.341419  337340 main.go:141] libmachine: Using SSH client type: native
	I1016 18:59:43.341745  337340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1016 18:59:43.341761  337340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:59:43.818670  337340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:59:43.818698  337340 machine.go:96] duration metric: took 4.89396612s to provisionDockerMachine
	I1016 18:59:43.818717  337340 start.go:293] postStartSetup for "ha-556988-m03" (driver="docker")
	I1016 18:59:43.818729  337340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:59:43.818800  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:59:43.818847  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.843907  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:43.949206  337340 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:59:43.952687  337340 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:59:43.952714  337340 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:59:43.952725  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 18:59:43.952777  337340 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 18:59:43.952858  337340 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 18:59:43.952870  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /etc/ssl/certs/2903122.pem
	I1016 18:59:43.952966  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:59:43.960926  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 18:59:43.978806  337340 start.go:296] duration metric: took 160.073239ms for postStartSetup
	I1016 18:59:43.978931  337340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:59:43.979022  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:43.996302  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.105727  337340 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:59:44.111903  337340 fix.go:56] duration metric: took 5.611630616s for fixHost
	I1016 18:59:44.111982  337340 start.go:83] releasing machines lock for "ha-556988-m03", held for 5.611732928s
	I1016 18:59:44.112098  337340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:59:44.134145  337340 out.go:179] * Found network options:
	I1016 18:59:44.137067  337340 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1016 18:59:44.139998  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140032  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140058  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	W1016 18:59:44.140075  337340 proxy.go:120] fail to check proxy env: Error ip not in block
	I1016 18:59:44.140162  337340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:59:44.140230  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.140496  337340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:59:44.140567  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:59:44.164491  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.165069  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:59:44.454001  337340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:59:44.465509  337340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:59:44.465581  337340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:59:44.480708  337340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:59:44.480733  337340 start.go:495] detecting cgroup driver to use...
	I1016 18:59:44.480764  337340 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 18:59:44.480811  337340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:59:44.509331  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:59:44.557844  337340 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:59:44.557910  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:59:44.588703  337340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:59:44.608697  337340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:59:44.891467  337340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:59:45.246520  337340 docker.go:234] disabling docker service ...
	I1016 18:59:45.246692  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:59:45.273127  337340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:59:45.348286  337340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:59:45.631385  337340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:59:45.856092  337340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:59:45.872650  337340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:59:45.898496  337340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:59:45.898570  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.916170  337340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 18:59:45.916240  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.931066  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.942127  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.952558  337340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:59:45.963182  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.973482  337340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.986310  337340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:59:45.996358  337340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:59:46.016551  337340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:59:46.027307  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:59:46.234905  337340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:01:16.580381  337340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345368285s)
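	The crio restart above took just over 90 seconds, which dominates this phase. A minimal sketch for digging into a slow restart on the affected node (hypothetical follow-up commands, not captured in this run; profile ha-556988 and node m03 are taken from the context above):

	    minikube -p ha-556988 ssh -n m03 -- sudo systemctl status crio --no-pager
	    minikube -p ha-556988 ssh -n m03 -- sudo journalctl -u crio --since '15 min ago' --no-pager | tail -n 50
	    # confirm the sed edits above landed in the drop-in config
	    minikube -p ha-556988 ssh -n m03 -- cat /etc/crio/crio.conf.d/02-crio.conf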
	I1016 19:01:16.580410  337340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:01:16.580469  337340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:01:16.585512  337340 start.go:563] Will wait 60s for crictl version
	I1016 19:01:16.585597  337340 ssh_runner.go:195] Run: which crictl
	I1016 19:01:16.589679  337340 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:01:16.622370  337340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:01:16.622451  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.658490  337340 ssh_runner.go:195] Run: crio --version
	I1016 19:01:16.704130  337340 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:01:16.707094  337340 out.go:179]   - env NO_PROXY=192.168.49.2
	I1016 19:01:16.709928  337340 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1016 19:01:16.713018  337340 cli_runner.go:164] Run: docker network inspect ha-556988 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:01:16.729609  337340 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 19:01:16.733845  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:16.745323  337340 mustload.go:65] Loading cluster: ha-556988
	I1016 19:01:16.745573  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:16.745830  337340 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 19:01:16.768218  337340 host.go:66] Checking if "ha-556988" exists ...
	I1016 19:01:16.768499  337340 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988 for IP: 192.168.49.4
	I1016 19:01:16.768516  337340 certs.go:195] generating shared ca certs ...
	I1016 19:01:16.768531  337340 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:01:16.768657  337340 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:01:16.768700  337340 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:01:16.768712  337340 certs.go:257] generating profile certs ...
	I1016 19:01:16.768792  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key
	I1016 19:01:16.768863  337340 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key.a8cc042e
	I1016 19:01:16.768908  337340 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key
	I1016 19:01:16.768921  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1016 19:01:16.768935  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1016 19:01:16.768951  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1016 19:01:16.768967  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1016 19:01:16.768979  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1016 19:01:16.768993  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1016 19:01:16.769005  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1016 19:01:16.769021  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1016 19:01:16.769073  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:01:16.769107  337340 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:01:16.769120  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:01:16.769171  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:01:16.769198  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:01:16.769219  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:01:16.769266  337340 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:01:16.769303  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> /usr/share/ca-certificates/2903122.pem
	I1016 19:01:16.769321  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:16.769333  337340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem -> /usr/share/ca-certificates/290312.pem
	I1016 19:01:16.769395  337340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 19:01:16.790995  337340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 19:01:16.889480  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1016 19:01:16.893451  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1016 19:01:16.901926  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1016 19:01:16.905634  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1016 19:01:16.914578  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1016 19:01:16.918356  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1016 19:01:16.926812  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1016 19:01:16.930535  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1016 19:01:16.940123  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1016 19:01:16.944094  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1016 19:01:16.953660  337340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1016 19:01:16.957601  337340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1016 19:01:16.966798  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:01:16.985414  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:01:17.016239  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:01:17.039046  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:01:17.060181  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 19:01:17.080570  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:01:17.105243  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:01:17.127158  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:01:17.146687  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:01:17.165827  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:01:17.185097  337340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:01:17.205538  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1016 19:01:17.220414  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1016 19:01:17.233996  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1016 19:01:17.248515  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1016 19:01:17.264946  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1016 19:01:17.279635  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1016 19:01:17.293984  337340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1016 19:01:17.308573  337340 ssh_runner.go:195] Run: openssl version
	I1016 19:01:17.315622  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:01:17.326067  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330066  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.330132  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:01:17.373334  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:01:17.382328  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:01:17.393741  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398032  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.398108  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:01:17.446048  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:01:17.454686  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:01:17.471186  337340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475661  337340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.475768  337340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:01:17.543984  337340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:01:17.583902  337340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:01:17.596353  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:01:17.693798  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:01:17.818221  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:01:17.876853  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:01:17.929859  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:01:18.028781  337340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 19:01:18.102665  337340 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1016 19:01:18.102853  337340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-556988-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-556988 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
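	The kubeadm.go lines above show the kubelet unit drop-in (node IP 192.168.49.4, hostname override ha-556988-m03) and the cluster config pushed to m03. A hypothetical check that the drop-in and flags actually took effect on the node (names as above; not part of the test output):

	    minikube -p ha-556988 ssh -n m03 -- sudo systemctl cat kubelet
	    minikube -p ha-556988 ssh -n m03 -- pgrep -af kubelet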
	I1016 19:01:18.102905  337340 kube-vip.go:115] generating kube-vip config ...
	I1016 19:01:18.102986  337340 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1016 19:01:18.130313  337340 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:01:18.130424  337340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
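	kube-vip.go gives up on IPVS-based control-plane load-balancing above because `lsmod | grep ip_vs` returned nothing, so the generated manifest relies on ARP leader election only. A hedged sketch for checking whether the kernel could supply the modules (with the docker driver the node shares the host kernel, so this would run on the host; hypothetical, not part of the run):

	    lsmod | grep ip_vs || echo 'ip_vs not currently loaded'
	    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh && lsmod | grep ip_vs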
	I1016 19:01:18.130517  337340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:01:18.145569  337340 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:01:18.145719  337340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1016 19:01:18.158741  337340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 19:01:18.175520  337340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:01:18.201069  337340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1016 19:01:18.223378  337340 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1016 19:01:18.230855  337340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:01:18.262619  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.515974  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.534144  337340 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:01:18.534496  337340 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:01:18.537694  337340 out.go:179] * Verifying Kubernetes components...
	I1016 19:01:18.540519  337340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:01:18.853344  337340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:01:18.870280  337340 kapi.go:59] client config for ha-556988: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/ha-556988/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1016 19:01:18.870409  337340 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1016 19:01:18.870686  337340 node_ready.go:35] waiting up to 6m0s for node "ha-556988-m03" to be "Ready" ...
	W1016 19:01:20.874310  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:22.875099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:24.875540  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:27.374249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:29.375013  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:31.874737  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:34.373989  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:36.375778  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:38.874593  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:40.874828  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:42.875042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:45.378712  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:47.875029  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:49.875081  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:52.374191  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:54.374870  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:56.874176  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:01:58.874680  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:00.875335  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:03.374728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:05.874729  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:07.874820  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:10.374640  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:12.374741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:14.375254  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:16.874287  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:19.375567  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:21.874303  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:24.374724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:26.874201  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:28.875139  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:30.875913  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:32.876533  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:35.374093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:37.374317  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:39.873972  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:41.874678  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:44.374313  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:46.374843  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:48.375268  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:50.874442  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:52.874670  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:54.876042  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:57.374242  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:02:59.374764  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:01.375629  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:03.874090  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:05.874933  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:07.874988  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:10.375278  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:12.875217  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:15.374125  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:17.374601  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:19.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:21.874761  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:24.373999  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:26.374333  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:28.374800  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:30.375182  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:32.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:34.875038  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:37.374178  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:39.374897  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:41.376724  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:43.875074  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:45.875991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:48.374682  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:50.374756  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:52.874361  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:54.874691  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:57.375643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:03:59.874852  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:02.374714  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:04.874203  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:07.375099  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:09.874992  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:12.375032  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:14.874592  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:17.374337  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:19.375719  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:21.874855  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:23.875005  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:26.374357  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:28.874350  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:31.374814  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:33.375229  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:35.376366  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:37.875161  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:40.374398  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:42.375093  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:44.375288  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:46.874677  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:49.374853  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:51.874402  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:53.874728  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:56.374314  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:04:58.374922  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:00.398713  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:02.874327  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:04.875407  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:07.374991  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:09.375065  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:11.874375  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:13.875021  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:15.875906  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:18.374204  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:20.375019  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:22.874356  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:24.874622  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:26.874889  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:29.374262  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:31.375054  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:33.408848  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:35.874199  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:37.874785  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:39.875878  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:42.374064  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:44.374403  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:46.874583  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:49.375025  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:51.875263  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:54.374635  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:56.374838  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:05:58.874718  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:01.374046  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:03.874734  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:06.374348  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:08.874846  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:10.875133  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:13.373809  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:15.374383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:17.374643  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:19.375329  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:21.874529  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:23.874845  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:26.374245  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:28.874069  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:30.874264  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:32.874477  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:35.374326  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:37.874249  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:39.874482  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:41.875383  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:44.374077  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:46.374372  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:48.874600  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:50.874741  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:53.375464  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:55.875061  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:06:58.374676  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:00.377657  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:02.384684  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:04.874707  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:06.875283  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:09.374694  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:11.874370  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:14.375095  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	W1016 19:07:16.874880  337340 node_ready.go:57] node "ha-556988-m03" has "Ready":"Unknown" status (will retry)
	I1016 19:07:18.870877  337340 node_ready.go:38] duration metric: took 6m0.000146858s for node "ha-556988-m03" to be "Ready" ...
	I1016 19:07:18.873970  337340 out.go:203] 
	W1016 19:07:18.876680  337340 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1016 19:07:18.876697  337340 out.go:285] * 
	W1016 19:07:18.878873  337340 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:07:18.881589  337340 out.go:203] 
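	The restart ultimately fails because ha-556988-m03 never leaves the "Ready":"Unknown" condition within the 6-minute window above. A minimal sketch of how the node condition and its kubelet could be inspected after such a timeout (hypothetical commands; the kubectl context name is assumed to match the profile name):

	    kubectl --context ha-556988 get nodes -o wide
	    kubectl --context ha-556988 describe node ha-556988-m03 | sed -n '/Conditions:/,/Addresses:/p'
	    # kubelet view from inside the node
	    minikube -p ha-556988 ssh -n m03 -- sudo journalctl -u kubelet --no-pager | tail -n 30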
	
	
	==> CRI-O <==
	Oct 16 18:59:36 ha-556988 crio[667]: time="2025-10-16T18:59:36.033008604Z" level=info msg="Started container" PID=1192 containerID=668681e0d58e70e2edf23bedf32d99282f6a8c38b0aad26000be1021582b8b56 description=default/busybox-7b57f96db7-8m2wv/busybox id=e73f877a-ee31-407d-ac4c-a34a4abcd363 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f
	Oct 16 19:00:06 ha-556988 conmon[1141]: conmon ee0dc742d47b892b93ac <ninfo>: container 1150 exited with status 1
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.415993438Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=75767156-3fb6-42b4-95e2-d34aa2a5bea8 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.41793089Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8b5f67f6-e1d4-4af2-88c2-48fa40df96aa name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.419946292Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=58e00405-99c8-449e-a3ad-5392da1ae41a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.42034836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428022662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428394313Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3db4041b6d3bc223822867a19715c3e66ed2c364c6b3187c2a59cc7adbe12ade/merged/etc/passwd: no such file or directory"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.428502664Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3db4041b6d3bc223822867a19715c3e66ed2c364c6b3187c2a59cc7adbe12ade/merged/etc/group: no such file or directory"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.431213384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.460374693Z" level=info msg="Created container e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3: kube-system/storage-provisioner/storage-provisioner" id=58e00405-99c8-449e-a3ad-5392da1ae41a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.469592921Z" level=info msg="Starting container: e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3" id=2b8bafce-4d00-4a8d-8c2a-a4b19468c0be name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:00:06 ha-556988 crio[667]: time="2025-10-16T19:00:06.472182538Z" level=info msg="Started container" PID=1395 containerID=e24f8a6878f298558b57ff3af4fc74fbb0b1169f9fd531dd73d4e9fdb9db8ec3 description=kube-system/storage-provisioner/storage-provisioner id=2b8bafce-4d00-4a8d-8c2a-a4b19468c0be name=/runtime.v1.RuntimeService/StartContainer sandboxID=3100d564efc4cf0ded67a741f8ebf6a46eeb48236dd12f0b244aa7eb0e1041e1
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.166222167Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.169795977Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.16983204Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.169854342Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173639915Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173676863Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.173701159Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.176974688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.177010775Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.177034324Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.180287168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:00:16 ha-556988 crio[667]: time="2025-10-16T19:00:16.180322968Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	e24f8a6878f29       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   3100d564efc4c       storage-provisioner                 kube-system
	668681e0d58e7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   3b5419232b288       busybox-7b57f96db7-8m2wv            default
	ee0dc742d47b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   3100d564efc4c       storage-provisioner                 kube-system
	d2ef4f1c6fd3d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   f62a65ca971ca       coredns-66bc5c9577-bg5gf            kube-system
	fa4be697bf069       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   9a193d0046bea       kindnet-c5vhh                       kube-system
	9f54a6f37bdff       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   219f0f758c58e       coredns-66bc5c9577-qnwbz            kube-system
	676cc3096c2c4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   2f36988f94206       kube-controller-manager-ha-556988   kube-system
	66e732aebd424       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   2bc6a25bda869       kube-proxy-l2lf6                    kube-system
	a6a97464c4b58       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   d09d9e3f4595d       kube-vip-ha-556988                  kube-system
	37de0677d0291       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   ff19c20039a2e       kube-apiserver-ha-556988            kube-system
	13005c03c7e83       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   2f36988f94206       kube-controller-manager-ha-556988   kube-system
	ccd1663977e23       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   16edb5468bfd8       etcd-ha-556988                      kube-system
	0947527fb7c66       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   9953eab01a12a       kube-scheduler-ha-556988            kube-system
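	Note: the container listing above is the captured CRI state of the primary control-plane node at the time of failure. A listing in the same shape can normally be regenerated against the live profile; the invocation below is only a sketch (the profile name ha-556988 comes from the log itself, while the ssh/crictl passthrough form and the presence of crictl on the node are assumptions, not part of the captured output):

	  # sketch: list all CRI-O containers (running and exited) on the primary node of the ha-556988 profile
	  out/minikube-linux-arm64 -p ha-556988 ssh -- sudo crictl ps -a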
	
	
	==> coredns [9f54a6f37bdffe68140f1859804fc0edaf64ea559a101f6caf876000479c9ee1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60434 - 54918 "HINFO IN 3143784560746213008.1236521785684304278. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01077593s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d2ef4f1c6fd3dddc27aea4bdc4cf4ce1714f112fa6b015df816ae128c747014c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37299 - 23942 "HINFO IN 3089919825197669795.1270930252494634912. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013048437s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-556988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_53_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:53:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:07:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:05:32 +0000   Thu, 16 Oct 2025 18:59:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-556988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                b59e7c71-f015-4beb-a0b1-1db2d92a9291
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8m2wv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-bg5gf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-qnwbz             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-556988                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-c5vhh                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-556988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-556988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-l2lf6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-556988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-556988                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m55s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x9 over 13m)      kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           13m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-556988 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           9m                     node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   NodeHasSufficientMemory  8m33s (x8 over 8m33s)  kubelet          Node ha-556988 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m33s (x8 over 8m33s)  kubelet          Node ha-556988 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m33s (x8 over 8m33s)  kubelet          Node ha-556988 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m53s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-556988 event: Registered Node ha-556988 in Controller
	
	
	Name:               ha-556988-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_16T18_54_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:07:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:07:17 +0000   Thu, 16 Oct 2025 18:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-556988-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                7a9bc276-8208-4c5e-a8a7-151b962ba6f2
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-g6s82                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-556988-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-9mrmf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-556988-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-556988-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-mx9hc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-556988-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-556988-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m28s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Warning  CgroupV1                 9m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     9m37s (x8 over 9m38s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m37s (x8 over 9m38s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m37s (x8 over 9m38s)  kubelet          Node ha-556988-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             9m11s                  node-controller  Node ha-556988-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           9m                     node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   Starting                 8m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m29s (x8 over 8m29s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s (x8 over 8m29s)  kubelet          Node ha-556988-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s (x8 over 8m29s)  kubelet          Node ha-556988-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m53s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-556988-m02 event: Registered Node ha-556988-m02 in Controller
	
	
	Name:               ha-556988-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-556988-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=ha-556988
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_16T18_56_35_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:56:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-556988-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:58:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 16 Oct 2025 18:57:16 +0000   Thu, 16 Oct 2025 19:00:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-556988-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                3974a7c6-147c-48e8-b522-87d967a9ed5f
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-flq9x       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2j2kg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-556988-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-556988-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m                 node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           7m53s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   RegisteredNode           7m49s              node-controller  Node ha-556988-m04 event: Registered Node ha-556988-m04 in Controller
	  Normal   NodeNotReady             7m3s               node-controller  Node ha-556988-m04 status is now: NodeNotReady
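	Note: per the describe output above, ha-556988-m04 carries the node.kubernetes.io/unreachable taints and its lease RenewTime stopped advancing at 18:58:16, i.e. the kubelet quit posting status well before this snapshot was taken. A quick way to confirm the node view from the host would be something like the sketch below (profile and node names are taken from the log; the kubectl passthrough flags are assumed and were not part of the captured output):

	  # sketch: show node readiness, then re-dump the unreachable worker's status
	  out/minikube-linux-arm64 -p ha-556988 kubectl -- get nodes -o wide
	  out/minikube-linux-arm64 -p ha-556988 kubectl -- describe node ha-556988-m04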
	
	
	==> dmesg <==
	[  +0.510048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035217] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.777829] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.353148] kauditd_printk_skb: 36 callbacks suppressed
	[Oct16 17:39] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=00000000a1708097{9P.session} n=00000000c48db394
	[  +0.001150] FS-Cache: O-key=[10] '34323935323233313231'
	[  +0.000794] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a1708097{9P.session} n=0000000008f2874d
	[  +0.001104] FS-Cache: N-key=[10] '34323935323233313231'
	[Oct16 17:40] hrtimer: interrupt took 46683506 ns
	[Oct16 18:30] kauditd_printk_skb: 8 callbacks suppressed
	[Oct16 18:32] overlayfs: idmapped layers are currently not supported
	[  +0.067059] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct16 18:38] overlayfs: idmapped layers are currently not supported
	[Oct16 18:39] overlayfs: idmapped layers are currently not supported
	[Oct16 18:53] overlayfs: idmapped layers are currently not supported
	[Oct16 18:54] overlayfs: idmapped layers are currently not supported
	[Oct16 18:55] overlayfs: idmapped layers are currently not supported
	[Oct16 18:56] overlayfs: idmapped layers are currently not supported
	[Oct16 18:57] overlayfs: idmapped layers are currently not supported
	[Oct16 18:58] overlayfs: idmapped layers are currently not supported
	[Oct16 18:59] overlayfs: idmapped layers are currently not supported
	[ +38.025144] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ccd1663977e230bbda3cae69e035a19bb725c3f88efd4340e2acdb82e35b17b4] <==
	{"level":"info","ts":"2025-10-16T19:01:17.904633Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.939403Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:01:17.945273Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:22.986850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:33184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:07:23.040670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:33206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:07:23.066338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:33214","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T19:07:23.096019Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6591995946286876817 12593026477526642892)"}
	{"level":"info","ts":"2025-10-16T19:07:23.098104Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"dd9f3debc3328b7e","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-16T19:07:23.098168Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.098451Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.098481Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.098727Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.098964Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.098803Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099083Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2025-10-16T19:07:23.099062Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099359Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e","error":"context canceled"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099448Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"dd9f3debc3328b7e","error":"failed to read dd9f3debc3328b7e on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-10-16T19:07:23.099492Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.099641Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e","error":"context canceled"}
	{"level":"info","ts":"2025-10-16T19:07:23.099700Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.099747Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"info","ts":"2025-10-16T19:07:23.099811Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.147994Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"dd9f3debc3328b7e"}
	{"level":"warn","ts":"2025-10-16T19:07:23.148642Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"dd9f3debc3328b7e"}
	
	
	==> kernel <==
	 19:07:32 up  1:49,  0 user,  load average: 0.62, 1.03, 1.51
	Linux ha-556988 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fa4be697bf0693026672a5f6c9fe73e79415080f58163a0e09e3473403170716] <==
	I1016 19:06:56.160406       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:06:56.160461       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:06:56.160473       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:06.166319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:06.166355       1 main.go:301] handling current node
	I1016 19:07:06.166371       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:06.166377       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:06.166532       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:07:06.166546       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:06.166618       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:06.166629       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:07:16.159868       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:16.159905       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	I1016 19:07:16.160092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:16.160107       1 main.go:301] handling current node
	I1016 19:07:16.160120       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:16.160126       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:16.160187       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1016 19:07:16.160200       1 main.go:324] Node ha-556988-m03 has CIDR [10.244.2.0/24] 
	I1016 19:07:26.160029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 19:07:26.160063       1 main.go:301] handling current node
	I1016 19:07:26.160079       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1016 19:07:26.160084       1 main.go:324] Node ha-556988-m02 has CIDR [10.244.1.0/24] 
	I1016 19:07:26.160318       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1016 19:07:26.160339       1 main.go:324] Node ha-556988-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [37de0677d02917c07b70727749f73f2b0b33bfa000e9e137a54da309d14e7ae7] <==
	I1016 18:59:34.894194       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 18:59:34.896075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1016 18:59:34.896820       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I1016 18:59:34.911822       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 18:59:34.911849       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 18:59:34.920290       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:59:34.943461       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:59:34.950382       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 18:59:34.950416       1 policy_source.go:240] refreshing policies
	I1016 18:59:34.957319       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:59:34.959365       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:59:34.965217       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:59:34.965371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:59:34.971502       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:59:35.000033       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:59:35.031357       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1016 18:59:35.038221       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1016 18:59:35.053357       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:59:37.014352       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 18:59:37.014434       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1016 18:59:38.259757       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1016 18:59:40.018709       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:59:40.263262       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1016 18:59:58.250950       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1016 19:00:04.488288       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf] <==
	I1016 18:59:02.476495       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:59:04.091611       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1016 18:59:04.091720       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:59:04.093637       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1016 18:59:04.094321       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1016 18:59:04.094476       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:59:04.094572       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1016 18:59:20.022685       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [676cc3096c2c428c05ab34bcbe56aece39203ffe11f9216bd113fe47eebe8d46] <==
	I1016 18:59:39.953791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-556988-m03"
	I1016 18:59:39.955638       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:59:39.954135       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-556988-m04"
	I1016 18:59:39.955915       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 18:59:39.956300       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 18:59:39.956794       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:59:39.958437       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:59:39.958540       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:59:39.958616       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:59:39.958670       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:59:39.958704       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:59:39.958656       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:59:39.964429       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:59:39.964622       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:59:39.970819       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:59:39.972126       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:59:39.973735       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:59:39.980202       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:59:39.983741       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:59:39.983826       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:59:39.983857       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:59:39.984304       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:59:39.988621       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:05:33.126031       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-zdc2h"
	E1016 19:05:33.383262       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [66e732aebd424e1c2b5fe5fa62678b4f60db51b175af2e4bdf9c05d13a3604b1] <==
	I1016 18:59:36.431382       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:59:37.074112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:59:37.404317       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:59:37.420237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 18:59:37.440936       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:59:37.547567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:59:37.547677       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:59:37.566424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:59:37.566839       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:59:37.567055       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:59:37.568313       1 config.go:200] "Starting service config controller"
	I1016 18:59:37.569180       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:59:37.569272       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:59:37.569301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:59:37.569349       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:59:37.569432       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:59:37.570116       1 config.go:309] "Starting node config controller"
	I1016 18:59:37.593325       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:59:37.593349       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:59:37.670251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:59:37.670355       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:59:37.670385       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0947527fb7c6600575f80d864636e177c1330efa7ab3caff116116cd0d07fe91] <==
	E1016 18:59:19.210127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:59:20.223711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:59:20.272552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:59:20.286900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:59:21.024708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 18:59:23.850262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:59:25.366156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:59:25.440106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:59:25.455207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:59:25.526976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:59:25.693902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:59:25.715863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:59:26.150506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:59:26.525981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:59:27.199538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:59:27.780409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:59:28.329859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:59:28.766926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:59:29.490851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:59:29.827336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:59:30.023162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:59:30.629590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:59:31.265247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:59:33.627332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1016 18:59:46.572262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941000     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b32400f6-5ec6-4a22-87fc-4b9fb8b25976-lib-modules\") pod \"kube-proxy-l2lf6\" (UID: \"b32400f6-5ec6-4a22-87fc-4b9fb8b25976\") " pod="kube-system/kube-proxy-l2lf6"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941076     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b32400f6-5ec6-4a22-87fc-4b9fb8b25976-xtables-lock\") pod \"kube-proxy-l2lf6\" (UID: \"b32400f6-5ec6-4a22-87fc-4b9fb8b25976\") " pod="kube-system/kube-proxy-l2lf6"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941166     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-xtables-lock\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941256     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-lib-modules\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941277     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/916b69a5-8ee0-43ee-87fd-9a88caebbec8-tmp\") pod \"storage-provisioner\" (UID: \"916b69a5-8ee0-43ee-87fd-9a88caebbec8\") " pod="kube-system/storage-provisioner"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.941319     803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aadf11dc-a51d-4828-9ae1-0295e92d1c95-cni-cfg\") pod \"kindnet-c5vhh\" (UID: \"aadf11dc-a51d-4828-9ae1-0295e92d1c95\") " pod="kube-system/kindnet-c5vhh"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.964270     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-556988\" already exists" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.964316     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.976099     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-556988\" already exists" pod="kube-system/etcd-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.976140     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: E1016 18:59:34.987350     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-556988\" already exists" pod="kube-system/kube-apiserver-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.987392     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-556988"
	Oct 16 18:59:34 ha-556988 kubelet[803]: I1016 18:59:34.999523     803 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:59:35 ha-556988 kubelet[803]: E1016 18:59:35.015087     803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-556988\" already exists" pod="kube-system/kube-controller-manager-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.039384     803 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.039591     803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-556988"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.064156     803 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.176886     803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-556988" podStartSLOduration=0.17686523 podStartE2EDuration="176.86523ms" podCreationTimestamp="2025-10-16 18:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:59:35.14812186 +0000 UTC m=+36.282512675" watchObservedRunningTime="2025-10-16 18:59:35.17686523 +0000 UTC m=+36.311256037"
	Oct 16 18:59:35 ha-556988 kubelet[803]: I1016 18:59:35.286741     803 scope.go:117] "RemoveContainer" containerID="13005c03c7e831233e329dc3df5f63331cf23a4ab71c78d67d200baaff30b9bf"
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.357678     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b WatchSource:0}: Error finding container 9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b: Status 404 returned error can't find the container with id 9a193d0046bea11d1febf065e134855191406dfa3aec11b726dd228067189c7b
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.401613     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a WatchSource:0}: Error finding container 219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a: Status 404 returned error can't find the container with id 219f0f758c58e5e2e91f77c7c3e14e6652dec28447814307cca604d39430e73a
	Oct 16 18:59:35 ha-556988 kubelet[803]: W1016 18:59:35.717419     803 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio-3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f WatchSource:0}: Error finding container 3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f: Status 404 returned error can't find the container with id 3b5419232b288e867bd15afc6e090129eb958d9e64a346ef88df56d1130e998f
	Oct 16 18:59:59 ha-556988 kubelet[803]: E1016 18:59:59.007146     803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9\": container with ID starting with df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9 not found: ID does not exist" containerID="df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9"
	Oct 16 18:59:59 ha-556988 kubelet[803]: I1016 18:59:59.007669     803 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9" err="rpc error: code = NotFound desc = could not find container \"df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9\": container with ID starting with df58f74bd2685b895cef12ab16695cf6c46768695a2a39165b75d08991392dd9 not found: ID does not exist"
	Oct 16 19:00:06 ha-556988 kubelet[803]: I1016 19:00:06.414711     803 scope.go:117] "RemoveContainer" containerID="ee0dc742d47b892b93aca268c637f4c52645442b0c386d0be82fcedaaa23bc41"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-556988 -n ha-556988
helpers_test.go:269: (dbg) Run:  kubectl --context ha-556988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-d75ps
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-556988 describe pod busybox-7b57f96db7-d75ps
helpers_test.go:290: (dbg) kubectl --context ha-556988 describe pod busybox-7b57f96db7-d75ps:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-d75ps
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzmh8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jzmh8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.17s)
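The post-mortem above narrows the degraded state down to a single pending pod by asking kubectl for everything whose status.phase is not Running (helpers_test.go:269). For reference only, the same query expressed with client-go looks roughly like the sketch below; the kubeconfig path, context handling, and error handling are illustrative assumptions and are not part of the test harness.

// Illustrative sketch: list non-Running pods across all namespaces, mirroring
// the post-mortem helper's `kubectl get po --field-selector=status.phase!=Running -A`.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the default kubeconfig (the run above used context "ha-556988").
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same field selector the helper passes to kubectl.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}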

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.95s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-716057 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-716057 --output=json --user=testUser: exit status 80 (1.945480103s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4350a10-b86a-4b84-812e-7ceea69fc3cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-716057 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0cea4c9f-35d9-4589-a4ae-497a41598dcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-16T19:12:05Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c0826494-ed90-4c6a-a97b-61a340cb137d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-716057 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.95s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.89s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-716057 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-716057 --output=json --user=testUser: exit status 80 (1.891452241s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ff70ca28-c8dc-43b9-8c61-e247345020a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-716057 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5d22cf1b-6e15-4ada-9045-7dc46f2bc08a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-16T19:12:07Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"59a8716b-af20-4d88-9ba5-081a9f2d37f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-716057 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.89s)
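Both JSON-output failures above reduce to the same underlying check: before pausing or unpausing, minikube runs `sudo runc list -f json` on the node, and the command fails because /run/runc does not exist inside the node container, which is surfaced as GUEST_PAUSE / GUEST_UNPAUSE with exit status 80 (the same root cause recurs in the TestPause and TestStartStop pause failures later in this report). The snippet below is an illustrative Go sketch of that check run locally; it is not minikube's implementation, which drives the command through its ssh_runner into the kic container and retries a few times before giving up.

// Illustrative sketch only: run the same command the failing tests show in their logs.
// Assumes sudo and runc are on PATH; in the failing runs above the command exits 1
// with stderr "open /run/runc: no such file or directory".
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, out)
		os.Exit(80) // minikube reports this condition with exit code 80 (GUEST_PAUSE)
	}
	fmt.Printf("running containers: %s\n", out)
}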

                                                
                                    
x
+
TestPause/serial/Pause (7.87s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-870778 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-870778 --alsologtostderr -v=5: exit status 80 (2.487615979s)

                                                
                                                
-- stdout --
	* Pausing node pause-870778 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:36:12.963832  452210 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:36:12.964783  452210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:36:12.964827  452210 out.go:374] Setting ErrFile to fd 2...
	I1016 19:36:12.964848  452210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:36:12.965220  452210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:36:12.965552  452210 out.go:368] Setting JSON to false
	I1016 19:36:12.965607  452210 mustload.go:65] Loading cluster: pause-870778
	I1016 19:36:12.966131  452210 config.go:182] Loaded profile config "pause-870778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:36:12.966658  452210 cli_runner.go:164] Run: docker container inspect pause-870778 --format={{.State.Status}}
	I1016 19:36:12.986222  452210 host.go:66] Checking if "pause-870778" exists ...
	I1016 19:36:12.986547  452210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:36:13.046822  452210 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:36:13.036248196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:36:13.047530  452210 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-870778 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 19:36:13.050689  452210 out.go:179] * Pausing node pause-870778 ... 
	I1016 19:36:13.054469  452210 host.go:66] Checking if "pause-870778" exists ...
	I1016 19:36:13.054807  452210 ssh_runner.go:195] Run: systemctl --version
	I1016 19:36:13.054863  452210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:36:13.075085  452210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:36:13.179873  452210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:36:13.192907  452210 pause.go:52] kubelet running: true
	I1016 19:36:13.192974  452210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:36:13.440320  452210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:36:13.440417  452210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:36:13.509689  452210 cri.go:89] found id: "f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a"
	I1016 19:36:13.509713  452210 cri.go:89] found id: "4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0"
	I1016 19:36:13.509719  452210 cri.go:89] found id: "cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f"
	I1016 19:36:13.509723  452210 cri.go:89] found id: "9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41"
	I1016 19:36:13.509726  452210 cri.go:89] found id: "ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c"
	I1016 19:36:13.509730  452210 cri.go:89] found id: "05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e"
	I1016 19:36:13.509734  452210 cri.go:89] found id: "7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695"
	I1016 19:36:13.509737  452210 cri.go:89] found id: "453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5"
	I1016 19:36:13.509741  452210 cri.go:89] found id: "36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b"
	I1016 19:36:13.509752  452210 cri.go:89] found id: "998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	I1016 19:36:13.509757  452210 cri.go:89] found id: "3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0"
	I1016 19:36:13.509760  452210 cri.go:89] found id: "1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16"
	I1016 19:36:13.509763  452210 cri.go:89] found id: "78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18"
	I1016 19:36:13.509767  452210 cri.go:89] found id: "976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670"
	I1016 19:36:13.509770  452210 cri.go:89] found id: "7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	I1016 19:36:13.509778  452210 cri.go:89] found id: "6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117"
	I1016 19:36:13.509785  452210 cri.go:89] found id: ""
	I1016 19:36:13.509838  452210 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:36:13.521336  452210 retry.go:31] will retry after 206.631179ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:36:13Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:36:13.728881  452210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:36:13.742794  452210 pause.go:52] kubelet running: false
	I1016 19:36:13.742872  452210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:36:13.922078  452210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:36:13.922176  452210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:36:13.999841  452210 cri.go:89] found id: "f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a"
	I1016 19:36:13.999866  452210 cri.go:89] found id: "4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0"
	I1016 19:36:13.999873  452210 cri.go:89] found id: "cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f"
	I1016 19:36:13.999877  452210 cri.go:89] found id: "9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41"
	I1016 19:36:13.999881  452210 cri.go:89] found id: "ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c"
	I1016 19:36:13.999885  452210 cri.go:89] found id: "05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e"
	I1016 19:36:13.999888  452210 cri.go:89] found id: "7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695"
	I1016 19:36:13.999891  452210 cri.go:89] found id: "453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5"
	I1016 19:36:13.999894  452210 cri.go:89] found id: "36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b"
	I1016 19:36:13.999914  452210 cri.go:89] found id: "998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	I1016 19:36:13.999922  452210 cri.go:89] found id: "3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0"
	I1016 19:36:13.999933  452210 cri.go:89] found id: "1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16"
	I1016 19:36:13.999937  452210 cri.go:89] found id: "78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18"
	I1016 19:36:13.999940  452210 cri.go:89] found id: "976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670"
	I1016 19:36:13.999943  452210 cri.go:89] found id: "7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	I1016 19:36:13.999949  452210 cri.go:89] found id: "6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117"
	I1016 19:36:13.999956  452210 cri.go:89] found id: ""
	I1016 19:36:14.000009  452210 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:36:14.016891  452210 retry.go:31] will retry after 379.499367ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:36:14Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:36:14.397497  452210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:36:14.410929  452210 pause.go:52] kubelet running: false
	I1016 19:36:14.411010  452210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:36:14.567768  452210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:36:14.567857  452210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:36:14.633356  452210 cri.go:89] found id: "f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a"
	I1016 19:36:14.633380  452210 cri.go:89] found id: "4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0"
	I1016 19:36:14.633386  452210 cri.go:89] found id: "cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f"
	I1016 19:36:14.633390  452210 cri.go:89] found id: "9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41"
	I1016 19:36:14.633410  452210 cri.go:89] found id: "ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c"
	I1016 19:36:14.633415  452210 cri.go:89] found id: "05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e"
	I1016 19:36:14.633419  452210 cri.go:89] found id: "7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695"
	I1016 19:36:14.633423  452210 cri.go:89] found id: "453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5"
	I1016 19:36:14.633426  452210 cri.go:89] found id: "36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b"
	I1016 19:36:14.633434  452210 cri.go:89] found id: "998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	I1016 19:36:14.633438  452210 cri.go:89] found id: "3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0"
	I1016 19:36:14.633441  452210 cri.go:89] found id: "1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16"
	I1016 19:36:14.633444  452210 cri.go:89] found id: "78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18"
	I1016 19:36:14.633447  452210 cri.go:89] found id: "976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670"
	I1016 19:36:14.633454  452210 cri.go:89] found id: "7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	I1016 19:36:14.633460  452210 cri.go:89] found id: "6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117"
	I1016 19:36:14.633463  452210 cri.go:89] found id: ""
	I1016 19:36:14.633541  452210 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:36:14.644421  452210 retry.go:31] will retry after 337.639763ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:36:14Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:36:14.983007  452210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:36:14.997546  452210 pause.go:52] kubelet running: false
	I1016 19:36:14.997609  452210 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:36:15.226605  452210 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:36:15.226741  452210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:36:15.348449  452210 cri.go:89] found id: "f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a"
	I1016 19:36:15.348474  452210 cri.go:89] found id: "4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0"
	I1016 19:36:15.348480  452210 cri.go:89] found id: "cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f"
	I1016 19:36:15.348483  452210 cri.go:89] found id: "9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41"
	I1016 19:36:15.348492  452210 cri.go:89] found id: "ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c"
	I1016 19:36:15.348496  452210 cri.go:89] found id: "05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e"
	I1016 19:36:15.348500  452210 cri.go:89] found id: "7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695"
	I1016 19:36:15.348503  452210 cri.go:89] found id: "453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5"
	I1016 19:36:15.348507  452210 cri.go:89] found id: "36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b"
	I1016 19:36:15.348513  452210 cri.go:89] found id: "998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	I1016 19:36:15.348521  452210 cri.go:89] found id: "3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0"
	I1016 19:36:15.348524  452210 cri.go:89] found id: "1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16"
	I1016 19:36:15.348527  452210 cri.go:89] found id: "78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18"
	I1016 19:36:15.348531  452210 cri.go:89] found id: "976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670"
	I1016 19:36:15.348535  452210 cri.go:89] found id: "7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	I1016 19:36:15.348544  452210 cri.go:89] found id: "6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117"
	I1016 19:36:15.348548  452210 cri.go:89] found id: ""
	I1016 19:36:15.348598  452210 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:36:15.372949  452210 out.go:203] 
	W1016 19:36:15.376357  452210 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:36:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:36:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 19:36:15.376572  452210 out.go:285] * 
	* 
	W1016 19:36:15.386749  452210 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:36:15.391725  452210 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-870778 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-870778
helpers_test.go:243: (dbg) docker inspect pause-870778:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d",
	        "Created": "2025-10-16T19:34:22.706072665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446072,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:34:22.771344908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/hosts",
	        "LogPath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d-json.log",
	        "Name": "/pause-870778",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-870778:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-870778",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d",
	                "LowerDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-870778",
	                "Source": "/var/lib/docker/volumes/pause-870778/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-870778",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-870778",
	                "name.minikube.sigs.k8s.io": "pause-870778",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7a63c89c3a13182aef061ac751de4e017fc71f001e1cee9b705ed66aa923669",
	            "SandboxKey": "/var/run/docker/netns/b7a63c89c3a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-870778": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2d:27:9f:bc:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4da9b81e2b1a27078af3212586cff121719d63118f6ac0b3eb53ba67d200358c",
	                    "EndpointID": "f8902a2124156a79d6637080bb6327a024d8311c9d01e7703080500bd7e201f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-870778",
	                        "37c92bc7f6ae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-870778 -n pause-870778
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-870778 -n pause-870778: exit status 2 (391.051249ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-870778 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-870778 logs -n 25: (1.481176187s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-204009 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:29 UTC │ 16 Oct 25 19:30 UTC │
	│ start   │ -p missing-upgrade-153120 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-153120    │ jenkins │ v1.32.0 │ 16 Oct 25 19:29 UTC │ 16 Oct 25 19:30 UTC │
	│ start   │ -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:30 UTC │ 16 Oct 25 19:31 UTC │
	│ start   │ -p missing-upgrade-153120 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-153120    │ jenkins │ v1.37.0 │ 16 Oct 25 19:30 UTC │ 16 Oct 25 19:31 UTC │
	│ delete  │ -p missing-upgrade-153120                                                                                                                │ missing-upgrade-153120    │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:31 UTC │
	│ start   │ -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-627378 │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:31 UTC │
	│ delete  │ -p NoKubernetes-204009                                                                                                                   │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:31 UTC │
	│ start   │ -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:32 UTC │
	│ stop    │ -p kubernetes-upgrade-627378                                                                                                             │ kubernetes-upgrade-627378 │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-627378 │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │                     │
	│ ssh     │ -p NoKubernetes-204009 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │                     │
	│ stop    │ -p NoKubernetes-204009                                                                                                                   │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p NoKubernetes-204009 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ ssh     │ -p NoKubernetes-204009 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │                     │
	│ delete  │ -p NoKubernetes-204009                                                                                                                   │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p stopped-upgrade-284470 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-284470    │ jenkins │ v1.32.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ stop    │ stopped-upgrade-284470 stop                                                                                                              │ stopped-upgrade-284470    │ jenkins │ v1.32.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p stopped-upgrade-284470 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-284470    │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:33 UTC │
	│ delete  │ -p stopped-upgrade-284470                                                                                                                │ stopped-upgrade-284470    │ jenkins │ v1.37.0 │ 16 Oct 25 19:33 UTC │ 16 Oct 25 19:33 UTC │
	│ start   │ -p running-upgrade-779500 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-779500    │ jenkins │ v1.32.0 │ 16 Oct 25 19:33 UTC │ 16 Oct 25 19:33 UTC │
	│ start   │ -p running-upgrade-779500 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-779500    │ jenkins │ v1.37.0 │ 16 Oct 25 19:33 UTC │ 16 Oct 25 19:34 UTC │
	│ delete  │ -p running-upgrade-779500                                                                                                                │ running-upgrade-779500    │ jenkins │ v1.37.0 │ 16 Oct 25 19:34 UTC │ 16 Oct 25 19:34 UTC │
	│ start   │ -p pause-870778 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-870778              │ jenkins │ v1.37.0 │ 16 Oct 25 19:34 UTC │ 16 Oct 25 19:35 UTC │
	│ start   │ -p pause-870778 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-870778              │ jenkins │ v1.37.0 │ 16 Oct 25 19:35 UTC │ 16 Oct 25 19:36 UTC │
	│ pause   │ -p pause-870778 --alsologtostderr -v=5                                                                                                   │ pause-870778              │ jenkins │ v1.37.0 │ 16 Oct 25 19:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:35:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:35:41.622118  449947 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:35:41.622259  449947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:35:41.622269  449947 out.go:374] Setting ErrFile to fd 2...
	I1016 19:35:41.622274  449947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:35:41.622541  449947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:35:41.622906  449947 out.go:368] Setting JSON to false
	I1016 19:35:41.623864  449947 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8271,"bootTime":1760635071,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:35:41.623933  449947 start.go:141] virtualization:  
	I1016 19:35:41.627161  449947 out.go:179] * [pause-870778] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:35:41.631008  449947 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:35:41.631235  449947 notify.go:220] Checking for updates...
	I1016 19:35:41.636655  449947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:35:41.639395  449947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:35:41.642316  449947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:35:41.645201  449947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:35:41.648258  449947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:35:41.651752  449947 config.go:182] Loaded profile config "pause-870778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:35:41.652366  449947 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:35:41.685309  449947 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:35:41.685424  449947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:35:41.749942  449947 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:35:41.740158242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:35:41.750059  449947 docker.go:318] overlay module found
	I1016 19:35:41.753254  449947 out.go:179] * Using the docker driver based on existing profile
	I1016 19:35:41.756074  449947 start.go:305] selected driver: docker
	I1016 19:35:41.756098  449947 start.go:925] validating driver "docker" against &{Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:35:41.756223  449947 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:35:41.756344  449947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:35:41.881241  449947 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:35:41.865298543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:35:41.881638  449947 cni.go:84] Creating CNI manager for ""
	I1016 19:35:41.881698  449947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:35:41.881748  449947 start.go:349] cluster config:
	{Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:35:41.886831  449947 out.go:179] * Starting "pause-870778" primary control-plane node in "pause-870778" cluster
	I1016 19:35:41.889453  449947 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:35:41.892220  449947 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:35:41.895144  449947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:35:41.895195  449947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:35:41.895206  449947 cache.go:58] Caching tarball of preloaded images
	I1016 19:35:41.895303  449947 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:35:41.895312  449947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:35:41.895460  449947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/config.json ...
	I1016 19:35:41.895685  449947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:35:41.923094  449947 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:35:41.923145  449947 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:35:41.923180  449947 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:35:41.923269  449947 start.go:360] acquireMachinesLock for pause-870778: {Name:mk8801ea66fe5ad45547bf1c2262db986babd029 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:35:41.923434  449947 start.go:364] duration metric: took 119.984µs to acquireMachinesLock for "pause-870778"
	I1016 19:35:41.923462  449947 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:35:41.923473  449947 fix.go:54] fixHost starting: 
	I1016 19:35:41.923844  449947 cli_runner.go:164] Run: docker container inspect pause-870778 --format={{.State.Status}}
	I1016 19:35:41.946179  449947 fix.go:112] recreateIfNeeded on pause-870778: state=Running err=<nil>
	W1016 19:35:41.946218  449947 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 19:35:41.777314  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:41.777710  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:41.777749  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:41.777803  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:41.807264  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:41.807284  432201 cri.go:89] found id: ""
	I1016 19:35:41.807293  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:41.807376  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:41.817931  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:41.818031  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:41.859839  432201 cri.go:89] found id: ""
	I1016 19:35:41.859869  432201 logs.go:282] 0 containers: []
	W1016 19:35:41.859878  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:41.859884  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:41.859942  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:41.906359  432201 cri.go:89] found id: ""
	I1016 19:35:41.906379  432201 logs.go:282] 0 containers: []
	W1016 19:35:41.906388  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:41.906395  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:41.906453  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:41.945382  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:41.945401  432201 cri.go:89] found id: ""
	I1016 19:35:41.945410  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:41.945465  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:41.953660  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:41.953731  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:41.998843  432201 cri.go:89] found id: ""
	I1016 19:35:41.998869  432201 logs.go:282] 0 containers: []
	W1016 19:35:41.998878  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:41.998884  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:41.998948  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:42.047594  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:42.047620  432201 cri.go:89] found id: "cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375"
	I1016 19:35:42.047626  432201 cri.go:89] found id: ""
	I1016 19:35:42.047638  432201 logs.go:282] 2 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88 cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375]
	I1016 19:35:42.047703  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:42.052591  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:42.057532  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:42.057605  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:42.103677  432201 cri.go:89] found id: ""
	I1016 19:35:42.103702  432201 logs.go:282] 0 containers: []
	W1016 19:35:42.103711  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:42.103718  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:42.103793  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:42.142321  432201 cri.go:89] found id: ""
	I1016 19:35:42.142350  432201 logs.go:282] 0 containers: []
	W1016 19:35:42.142361  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:42.142380  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:42.142394  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:42.305310  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:42.305349  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:42.324886  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:42.324935  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:42.403535  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:42.403573  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:42.439583  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:42.439611  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:42.508812  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:42.508854  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:42.550392  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:42.550427  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:42.652954  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:42.652977  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:42.652990  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:42.694139  432201 logs.go:123] Gathering logs for kube-controller-manager [cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375] ...
	I1016 19:35:42.694224  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375"
	I1016 19:35:45.224126  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:45.224607  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:45.224655  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:45.224724  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:45.261763  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:45.261788  432201 cri.go:89] found id: ""
	I1016 19:35:45.261797  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:45.261868  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:45.267070  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:45.267158  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:41.949324  449947 out.go:252] * Updating the running docker "pause-870778" container ...
	I1016 19:35:41.949361  449947 machine.go:93] provisionDockerMachine start ...
	I1016 19:35:41.949452  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:41.974414  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:41.974744  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:41.974760  449947 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:35:42.154552  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-870778
	
	I1016 19:35:42.154585  449947 ubuntu.go:182] provisioning hostname "pause-870778"
	I1016 19:35:42.154665  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:42.187360  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:42.187676  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:42.187689  449947 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-870778 && echo "pause-870778" | sudo tee /etc/hostname
	I1016 19:35:42.407664  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-870778
	
	I1016 19:35:42.407738  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:42.428714  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:42.429032  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:42.429048  449947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-870778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-870778/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-870778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:35:42.594489  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:35:42.594575  449947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:35:42.594633  449947 ubuntu.go:190] setting up certificates
	I1016 19:35:42.594665  449947 provision.go:84] configureAuth start
	I1016 19:35:42.594760  449947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-870778
	I1016 19:35:42.614974  449947 provision.go:143] copyHostCerts
	I1016 19:35:42.615046  449947 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:35:42.615062  449947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:35:42.615139  449947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:35:42.615244  449947 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:35:42.615249  449947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:35:42.615276  449947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:35:42.615333  449947 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:35:42.615338  449947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:35:42.615360  449947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:35:42.615438  449947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.pause-870778 san=[127.0.0.1 192.168.76.2 localhost minikube pause-870778]
	I1016 19:35:43.749086  449947 provision.go:177] copyRemoteCerts
	I1016 19:35:43.749177  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:35:43.749220  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:43.768085  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:43.873030  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:35:43.893687  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1016 19:35:43.911395  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:35:43.930398  449947 provision.go:87] duration metric: took 1.335694457s to configureAuth
	I1016 19:35:43.930442  449947 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:35:43.930660  449947 config.go:182] Loaded profile config "pause-870778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:35:43.930768  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:43.948085  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:43.948401  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:43.948423  449947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:35:45.312809  432201 cri.go:89] found id: ""
	I1016 19:35:45.312909  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.312925  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:45.312942  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:45.313270  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:45.379519  432201 cri.go:89] found id: ""
	I1016 19:35:45.379543  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.379552  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:45.379560  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:45.379629  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:45.416624  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:45.416648  432201 cri.go:89] found id: ""
	I1016 19:35:45.416657  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:45.416747  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:45.421036  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:45.421116  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:45.452632  432201 cri.go:89] found id: ""
	I1016 19:35:45.452655  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.452665  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:45.452671  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:45.452729  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:45.482130  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:45.482154  432201 cri.go:89] found id: ""
	I1016 19:35:45.482164  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:45.482224  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:45.486221  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:45.486298  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:45.512496  432201 cri.go:89] found id: ""
	I1016 19:35:45.512561  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.512584  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:45.512607  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:45.512684  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:45.540495  432201 cri.go:89] found id: ""
	I1016 19:35:45.540570  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.540592  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:45.540617  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:45.540643  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:45.615683  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:45.615763  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:45.642368  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:45.642398  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:45.702034  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:45.702070  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:45.733189  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:45.733221  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:45.846088  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:45.846130  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:45.862980  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:45.863011  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:45.938988  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:45.939013  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:45.939026  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:48.474691  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:48.475169  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:48.475234  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:48.475313  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:48.502221  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:48.502252  432201 cri.go:89] found id: ""
	I1016 19:35:48.502262  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:48.502321  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:48.505893  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:48.505965  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:48.531717  432201 cri.go:89] found id: ""
	I1016 19:35:48.531744  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.531753  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:48.531760  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:48.531817  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:48.558828  432201 cri.go:89] found id: ""
	I1016 19:35:48.558853  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.558868  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:48.558875  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:48.558934  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:48.586465  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:48.586489  432201 cri.go:89] found id: ""
	I1016 19:35:48.586498  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:48.586557  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:48.591303  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:48.591450  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:48.618063  432201 cri.go:89] found id: ""
	I1016 19:35:48.618089  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.618098  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:48.618104  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:48.618161  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:48.648087  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:48.648110  432201 cri.go:89] found id: ""
	I1016 19:35:48.648118  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:48.648181  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:48.651773  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:48.651851  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:48.677357  432201 cri.go:89] found id: ""
	I1016 19:35:48.677382  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.677390  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:48.677396  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:48.677454  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:48.703982  432201 cri.go:89] found id: ""
	I1016 19:35:48.704007  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.704015  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:48.704025  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:48.704039  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:48.729155  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:48.729182  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:48.787930  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:48.787968  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:48.822465  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:48.822496  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:48.946307  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:48.946342  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:48.962419  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:48.962447  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:49.030538  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:49.030562  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:49.030576  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:49.063377  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:49.063409  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:49.299882  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:35:49.299907  449947 machine.go:96] duration metric: took 7.350537964s to provisionDockerMachine
	I1016 19:35:49.299918  449947 start.go:293] postStartSetup for "pause-870778" (driver="docker")
	I1016 19:35:49.299929  449947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:35:49.299995  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:35:49.300050  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.318048  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.421159  449947 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:35:49.424478  449947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:35:49.424508  449947 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:35:49.424519  449947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:35:49.424576  449947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:35:49.424673  449947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:35:49.424785  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:35:49.432562  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:35:49.451159  449947 start.go:296] duration metric: took 151.224637ms for postStartSetup
	I1016 19:35:49.451236  449947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:35:49.451300  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.467931  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.566683  449947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:35:49.572067  449947 fix.go:56] duration metric: took 7.648586184s for fixHost
	I1016 19:35:49.572095  449947 start.go:83] releasing machines lock for "pause-870778", held for 7.648645188s
	I1016 19:35:49.572181  449947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-870778
	I1016 19:35:49.589305  449947 ssh_runner.go:195] Run: cat /version.json
	I1016 19:35:49.589362  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.589654  449947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:35:49.589725  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.611923  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.622807  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.716947  449947 ssh_runner.go:195] Run: systemctl --version
	I1016 19:35:49.807704  449947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:35:49.851727  449947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:35:49.856217  449947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:35:49.856366  449947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:35:49.864764  449947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:35:49.864798  449947 start.go:495] detecting cgroup driver to use...
	I1016 19:35:49.864831  449947 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:35:49.864884  449947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:35:49.880852  449947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:35:49.894240  449947 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:35:49.894396  449947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:35:49.910817  449947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:35:49.924618  449947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:35:50.075508  449947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:35:50.241580  449947 docker.go:234] disabling docker service ...
	I1016 19:35:50.241683  449947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:35:50.257922  449947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:35:50.272532  449947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:35:50.428137  449947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:35:50.590140  449947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:35:50.603243  449947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:35:50.619195  449947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:35:50.619311  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.628109  449947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:35:50.628227  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.640870  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.650081  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.659301  449947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:35:50.667599  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.676818  449947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.685486  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.694628  449947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:35:50.702265  449947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:35:50.709809  449947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:35:50.862186  449947 ssh_runner.go:195] Run: sudo systemctl restart crio
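	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, set cgroup_manager to cgroupfs, reset conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, before daemon-reload and a CRI-O restart pick the changes up. A minimal sketch of the same key-rewrite idea in Go, assuming a simple "key = value" drop-in format (the file path and values come from the log; the parsing is deliberately simplified and is not minikube's code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// setDropInValue rewrites any "key = ..." line in a CRI-O drop-in file to the
	// given quoted value, appending the line if the key is not present.
	func setDropInValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var out []string
		found := false
		sc := bufio.NewScanner(strings.NewReader(string(data)))
		for sc.Scan() {
			line := sc.Text()
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, key+" ") || strings.HasPrefix(trimmed, key+"=") {
				line = fmt.Sprintf("%s = %q", key, value)
				found = true
			}
			out = append(out, line)
		}
		if !found {
			out = append(out, fmt.Sprintf("%s = %q", key, value))
		}
		return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		for key, value := range map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.10.1",
			"cgroup_manager": "cgroupfs",
		} {
			if err := setDropInValue(conf, key, value); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}
		fmt.Println("updated", conf, "- daemon-reload and restart crio to apply")
	}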
	I1016 19:35:51.049477  449947 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:35:51.049570  449947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:35:51.053589  449947 start.go:563] Will wait 60s for crictl version
	I1016 19:35:51.053688  449947 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.057170  449947 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:35:51.082811  449947 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:35:51.082915  449947 ssh_runner.go:195] Run: crio --version
	I1016 19:35:51.117084  449947 ssh_runner.go:195] Run: crio --version
	I1016 19:35:51.154762  449947 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:35:51.157815  449947 cli_runner.go:164] Run: docker network inspect pause-870778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:35:51.175734  449947 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 19:35:51.180707  449947 kubeadm.go:883] updating cluster {Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:35:51.180876  449947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:35:51.180939  449947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:35:51.218820  449947 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:35:51.218845  449947 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:35:51.218903  449947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:35:51.248697  449947 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:35:51.248723  449947 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:35:51.248731  449947 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1016 19:35:51.248840  449947 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-870778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:35:51.248920  449947 ssh_runner.go:195] Run: crio config
	I1016 19:35:51.328067  449947 cni.go:84] Creating CNI manager for ""
	I1016 19:35:51.328153  449947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:35:51.328189  449947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:35:51.328240  449947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-870778 NodeName:pause-870778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:35:51.328416  449947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-870778"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
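	The rendered configuration above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a rough sketch only, assuming gopkg.in/yaml.v3 is available and the file exists at that path, the stream can be split into its documents and each one's kind inspected like this:

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the log; any multi-document kubeadm config works here.
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		dec := yaml.NewDecoder(bytes.NewReader(raw))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				fmt.Fprintln(os.Stderr, "decode:", err)
				os.Exit(1)
			}
			// Each document is one kubeadm, kubelet, or kube-proxy config object.
			fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}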
	
	I1016 19:35:51.328533  449947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:35:51.336550  449947 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:35:51.336622  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:35:51.344293  449947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1016 19:35:51.360660  449947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:35:51.374247  449947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1016 19:35:51.387691  449947 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:35:51.391674  449947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:35:51.533536  449947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:35:51.547239  449947 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778 for IP: 192.168.76.2
	I1016 19:35:51.547264  449947 certs.go:195] generating shared ca certs ...
	I1016 19:35:51.547280  449947 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:35:51.547433  449947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:35:51.547482  449947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:35:51.547497  449947 certs.go:257] generating profile certs ...
	I1016 19:35:51.547612  449947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.key
	I1016 19:35:51.547690  449947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/apiserver.key.3ad9919e
	I1016 19:35:51.547738  449947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/proxy-client.key
	I1016 19:35:51.547852  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:35:51.547884  449947 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:35:51.547897  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:35:51.547926  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:35:51.547958  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:35:51.547984  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:35:51.548027  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:35:51.548712  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:35:51.567701  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:35:51.586325  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:35:51.604435  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:35:51.622964  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 19:35:51.648002  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:35:51.674689  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:35:51.692706  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 19:35:51.721327  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:35:51.740229  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:35:51.761674  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:35:51.789808  449947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:35:51.808309  449947 ssh_runner.go:195] Run: openssl version
	I1016 19:35:51.815974  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:35:51.830970  449947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:35:51.838142  449947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:35:51.838263  449947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:35:51.884700  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:35:51.894427  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:35:51.904806  449947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:35:51.909986  449947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:35:51.910118  449947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:35:51.954082  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:35:51.966072  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:35:51.978962  449947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:35:51.983067  449947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:35:51.983138  449947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:35:52.025700  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:35:52.034384  449947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:35:52.038932  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:35:52.087478  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:35:52.138215  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:35:52.183882  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:35:52.229427  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:35:52.293253  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
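	The repeated "openssl x509 -checkend 86400" runs above simply verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. A minimal equivalent check in Go with crypto/x509, assuming the certificate paths from the log are readable (a sketch, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			soon, err := expiresWithin(c, 24*time.Hour)
			switch {
			case err != nil:
				fmt.Printf("%s: %v\n", c, err)
			case soon:
				fmt.Printf("%s: expires within 24h\n", c)
			default:
				fmt.Printf("%s: ok\n", c)
			}
		}
	}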
	I1016 19:35:52.459350  449947 kubeadm.go:400] StartCluster: {Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:35:52.459474  449947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:35:52.459532  449947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:35:52.702295  449947 cri.go:89] found id: "cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f"
	I1016 19:35:52.702318  449947 cri.go:89] found id: "9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41"
	I1016 19:35:52.702323  449947 cri.go:89] found id: "ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c"
	I1016 19:35:52.702327  449947 cri.go:89] found id: "7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695"
	I1016 19:35:52.702350  449947 cri.go:89] found id: "453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5"
	I1016 19:35:52.702355  449947 cri.go:89] found id: "36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b"
	I1016 19:35:52.702362  449947 cri.go:89] found id: "998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	I1016 19:35:52.702366  449947 cri.go:89] found id: "3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0"
	I1016 19:35:52.702369  449947 cri.go:89] found id: "1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16"
	I1016 19:35:52.702380  449947 cri.go:89] found id: "78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18"
	I1016 19:35:52.702386  449947 cri.go:89] found id: "976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670"
	I1016 19:35:52.702390  449947 cri.go:89] found id: "7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	I1016 19:35:52.702393  449947 cri.go:89] found id: "6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117"
	I1016 19:35:52.702396  449947 cri.go:89] found id: ""
	I1016 19:35:52.702446  449947 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:35:52.738625  449947 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:35:52Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:35:52.738698  449947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:35:52.757284  449947 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:35:52.757305  449947 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:35:52.757353  449947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:35:52.769802  449947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:35:52.775472  449947 kubeconfig.go:125] found "pause-870778" server: "https://192.168.76.2:8443"
	I1016 19:35:52.776564  449947 kapi.go:59] client config for pause-870778: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 19:35:52.777099  449947 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 19:35:52.777120  449947 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 19:35:52.777149  449947 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 19:35:52.777159  449947 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 19:35:52.777164  449947 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
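	The client config dump above shows the pieces minikube needs to talk to the restarted control plane: the API server host and the client certificate, key and CA paths under the profile directory. A rough sketch of building an equivalent client with client-go, assuming k8s.io/client-go is available and the same paths exist (illustrative only, not the code that produced this log):

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Paths and host copied from the client config in the log.
		base := "/home/jenkins/minikube-integration/21738-288457/.minikube"
		cfg := &rest.Config{
			Host: "https://192.168.76.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: base + "/profiles/pause-870778/client.crt",
				KeyFile:  base + "/profiles/pause-870778/client.key",
				CAFile:   base + "/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// A trivial call to prove the client works: list the cluster's nodes.
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Name)
		}
	}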
	I1016 19:35:52.777481  449947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:35:52.791287  449947 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1016 19:35:52.791322  449947 kubeadm.go:601] duration metric: took 34.010854ms to restartPrimaryControlPlane
	I1016 19:35:52.791332  449947 kubeadm.go:402] duration metric: took 331.991514ms to StartCluster
	I1016 19:35:52.791348  449947 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:35:52.791413  449947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:35:52.792349  449947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:35:52.792564  449947 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:35:52.792877  449947 config.go:182] Loaded profile config "pause-870778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:35:52.792924  449947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:35:52.796352  449947 out.go:179] * Verifying Kubernetes components...
	I1016 19:35:52.796353  449947 out.go:179] * Enabled addons: 
	I1016 19:35:51.624060  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:51.624409  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:51.624458  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:51.624514  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:51.666085  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:51.666109  432201 cri.go:89] found id: ""
	I1016 19:35:51.666119  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:51.666174  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.671693  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:51.671769  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:51.716899  432201 cri.go:89] found id: ""
	I1016 19:35:51.716924  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.716933  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:51.716939  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:51.717000  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:51.759158  432201 cri.go:89] found id: ""
	I1016 19:35:51.759185  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.759193  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:51.759201  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:51.759259  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:51.800567  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:51.800592  432201 cri.go:89] found id: ""
	I1016 19:35:51.800601  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:51.800696  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.804831  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:51.804905  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:51.836863  432201 cri.go:89] found id: ""
	I1016 19:35:51.836891  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.836900  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:51.836906  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:51.836967  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:51.876412  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:51.876435  432201 cri.go:89] found id: ""
	I1016 19:35:51.876443  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:51.876501  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.880764  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:51.880866  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:51.911793  432201 cri.go:89] found id: ""
	I1016 19:35:51.911817  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.911825  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:51.911832  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:51.911952  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:51.947496  432201 cri.go:89] found id: ""
	I1016 19:35:51.947522  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.947531  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:51.947540  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:51.947579  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:51.972497  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:51.972527  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:52.086339  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:52.086365  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:52.086379  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:52.143960  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:52.143993  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:52.250937  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:52.251014  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:52.300844  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:52.300877  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:52.389784  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:52.389877  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:52.439877  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:52.439902  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
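	Each "Gathering logs for ..." line above shells out to a different collector: journalctl for kubelet and CRI-O, dmesg, kubectl describe nodes (which fails here because the apiserver refuses connections), and crictl logs for each discovered container ID. A simplified Go sketch of running a few such collectors and capturing their combined output (the command set is assumed from the log and trimmed down):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Each entry mirrors one of the "Gathering logs for ..." steps in the log.
		collectors := map[string][]string{
			"kubelet":          {"journalctl", "-u", "kubelet", "-n", "400"},
			"CRI-O":            {"journalctl", "-u", "crio", "-n", "400"},
			"container status": {"crictl", "ps", "-a"},
		}
		for name, args := range collectors {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("== %s failed: %v ==\n", name, err)
			}
			fmt.Printf("== %s ==\n%s\n", name, out)
		}
	}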
	I1016 19:35:55.113924  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:55.114316  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:55.114357  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:55.114411  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:55.167548  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:55.167567  432201 cri.go:89] found id: ""
	I1016 19:35:55.167575  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:55.167632  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:55.171447  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:55.171514  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:55.217611  432201 cri.go:89] found id: ""
	I1016 19:35:55.217632  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.217640  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:55.217647  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:55.217708  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:52.799400  449947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:35:52.799566  449947 addons.go:514] duration metric: took 6.642148ms for enable addons: enabled=[]
	I1016 19:35:53.104876  449947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:35:53.121775  449947 node_ready.go:35] waiting up to 6m0s for node "pause-870778" to be "Ready" ...
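	Here node_ready.go waits up to six minutes for the node's Ready condition; further down the node turns Ready after roughly 3.9 seconds. A compact sketch of that kind of wait loop with client-go, assuming a kubeconfig pointed at the cluster and the node name from the log (the real implementation differs):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has a Ready condition of True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// KUBECONFIG is an assumed way to locate the cluster credentials.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			ready, err := nodeReady(ctx, cs, "pause-870778")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Fprintln(os.Stderr, "timed out waiting for node to be Ready")
				os.Exit(1)
			case <-time.After(2 * time.Second):
			}
		}
	}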
	I1016 19:35:55.277373  432201 cri.go:89] found id: ""
	I1016 19:35:55.277394  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.277402  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:55.277420  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:55.277480  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:55.316798  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:55.316870  432201 cri.go:89] found id: ""
	I1016 19:35:55.316881  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:55.316973  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:55.321029  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:55.321096  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:55.376302  432201 cri.go:89] found id: ""
	I1016 19:35:55.376324  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.376332  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:55.376339  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:55.376398  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:55.419560  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:55.419579  432201 cri.go:89] found id: ""
	I1016 19:35:55.419596  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:55.419654  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:55.424012  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:55.424079  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:55.458286  432201 cri.go:89] found id: ""
	I1016 19:35:55.458362  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.458384  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:55.458404  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:55.458501  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:55.498252  432201 cri.go:89] found id: ""
	I1016 19:35:55.498274  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.498282  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:55.498292  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:55.498303  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:55.571497  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:55.571598  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:55.637787  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:55.637857  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:55.784223  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:55.784333  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:55.801905  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:55.801936  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:55.933815  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:55.933890  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:55.933919  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:55.991654  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:55.991728  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:56.101900  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:56.101935  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:58.645763  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:58.646275  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:58.646326  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:58.646386  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:58.677929  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:58.677949  432201 cri.go:89] found id: ""
	I1016 19:35:58.677958  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:58.678019  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:58.682034  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:58.682115  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:58.707887  432201 cri.go:89] found id: ""
	I1016 19:35:58.707908  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.707916  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:58.707923  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:58.707979  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:58.734327  432201 cri.go:89] found id: ""
	I1016 19:35:58.734349  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.734357  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:58.734363  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:58.734421  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:58.767958  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:58.767977  432201 cri.go:89] found id: ""
	I1016 19:35:58.767986  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:58.768044  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:58.772350  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:58.772468  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:58.798419  432201 cri.go:89] found id: ""
	I1016 19:35:58.798442  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.798451  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:58.798458  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:58.798519  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:58.824036  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:58.824114  432201 cri.go:89] found id: ""
	I1016 19:35:58.824140  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:58.824219  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:58.827929  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:58.828004  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:58.855116  432201 cri.go:89] found id: ""
	I1016 19:35:58.855152  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.855161  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:58.855187  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:58.855301  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:58.882514  432201 cri.go:89] found id: ""
	I1016 19:35:58.882539  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.882551  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:58.882561  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:58.882573  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:58.985618  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:58.985650  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:58.985665  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:59.040349  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:59.040391  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:59.137878  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:59.137916  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:59.168248  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:59.168273  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:59.234014  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:59.234052  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:59.265710  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:59.265739  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:59.396140  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:59.396176  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:57.021245  449947 node_ready.go:49] node "pause-870778" is "Ready"
	I1016 19:35:57.021272  449947 node_ready.go:38] duration metric: took 3.899458885s for node "pause-870778" to be "Ready" ...
	I1016 19:35:57.021287  449947 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:35:57.021349  449947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:35:57.043192  449947 api_server.go:72] duration metric: took 4.250580864s to wait for apiserver process to appear ...
	I1016 19:35:57.043214  449947 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:35:57.043233  449947 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:35:57.120670  449947 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 19:35:57.120783  449947 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
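	The 500 responses above come straight from the apiserver's /healthz endpoint: individual post-start hooks (bootstrap-controller, rbac/bootstrap-roles, and so on) have not finished yet, so the aggregate check fails even though etcd and the core handlers report ok. Probing the same endpoint directly requires the cluster CA plus a client certificate; a minimal Go sketch under those assumptions, reusing the file paths from the client config earlier in the log:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		base := "/home/jenkins/minikube-integration/21738-288457/.minikube" // paths from the log
		cert, err := tls.LoadX509KeyPair(
			base+"/profiles/pause-870778/client.crt",
			base+"/profiles/pause-870778/client.key",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		caPEM, err := os.ReadFile(base + "/ca.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		}}
		// "?verbose" asks the apiserver for the per-check [+]/[-] listing even on success.
		resp, err := client.Get("https://192.168.76.2:8443/healthz?verbose")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d\n%s", resp.StatusCode, body)
	}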
	I1016 19:35:57.543411  449947 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:35:57.555963  449947 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 19:35:57.555997  449947 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 19:35:58.043377  449947 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:35:58.054511  449947 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1016 19:35:58.055730  449947 api_server.go:141] control plane version: v1.34.1
	I1016 19:35:58.055811  449947 api_server.go:131] duration metric: took 1.012589085s to wait for apiserver health ...
	I1016 19:35:58.055835  449947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:35:58.059674  449947 system_pods.go:59] 8 kube-system pods found
	I1016 19:35:58.059715  449947 system_pods.go:61] "coredns-66bc5c9577-j2chq" [5654ae7b-c8b7-43ca-a406-a2b469ab6a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.059757  449947 system_pods.go:61] "coredns-66bc5c9577-vhkhz" [a0654543-a145-4d72-961a-72e07066dcf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.059771  449947 system_pods.go:61] "etcd-pause-870778" [547edc9d-3421-475f-bb7b-661d90b63c00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:35:58.059785  449947 system_pods.go:61] "kindnet-tljwg" [1f3c571f-2279-4d82-af72-febc2dd3f054] Running
	I1016 19:35:58.059798  449947 system_pods.go:61] "kube-apiserver-pause-870778" [a1ebe7af-c4db-41e2-943a-f4697671b7b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:35:58.059819  449947 system_pods.go:61] "kube-controller-manager-pause-870778" [fbd3c061-36a0-4f91-809d-b0ac670cc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:35:58.059835  449947 system_pods.go:61] "kube-proxy-x4dmw" [2ee80808-9395-44ba-aeee-51c69c0b1f69] Running
	I1016 19:35:58.059845  449947 system_pods.go:61] "kube-scheduler-pause-870778" [a48c3a35-740b-40f2-abf6-b13e1e0ad761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:35:58.059860  449947 system_pods.go:74] duration metric: took 4.002992ms to wait for pod list to return data ...
	I1016 19:35:58.059876  449947 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:35:58.063076  449947 default_sa.go:45] found service account: "default"
	I1016 19:35:58.063106  449947 default_sa.go:55] duration metric: took 3.222707ms for default service account to be created ...
	I1016 19:35:58.063117  449947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:35:58.067074  449947 system_pods.go:86] 8 kube-system pods found
	I1016 19:35:58.067128  449947 system_pods.go:89] "coredns-66bc5c9577-j2chq" [5654ae7b-c8b7-43ca-a406-a2b469ab6a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.067141  449947 system_pods.go:89] "coredns-66bc5c9577-vhkhz" [a0654543-a145-4d72-961a-72e07066dcf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.067185  449947 system_pods.go:89] "etcd-pause-870778" [547edc9d-3421-475f-bb7b-661d90b63c00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:35:58.067200  449947 system_pods.go:89] "kindnet-tljwg" [1f3c571f-2279-4d82-af72-febc2dd3f054] Running
	I1016 19:35:58.067206  449947 system_pods.go:89] "kube-apiserver-pause-870778" [a1ebe7af-c4db-41e2-943a-f4697671b7b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:35:58.067215  449947 system_pods.go:89] "kube-controller-manager-pause-870778" [fbd3c061-36a0-4f91-809d-b0ac670cc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:35:58.067224  449947 system_pods.go:89] "kube-proxy-x4dmw" [2ee80808-9395-44ba-aeee-51c69c0b1f69] Running
	I1016 19:35:58.067231  449947 system_pods.go:89] "kube-scheduler-pause-870778" [a48c3a35-740b-40f2-abf6-b13e1e0ad761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:35:58.067246  449947 system_pods.go:126] duration metric: took 4.116011ms to wait for k8s-apps to be running ...
	I1016 19:35:58.067259  449947 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:35:58.067318  449947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:35:58.083369  449947 system_svc.go:56] duration metric: took 16.099494ms WaitForService to wait for kubelet
	I1016 19:35:58.083398  449947 kubeadm.go:586] duration metric: took 5.290792011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:35:58.083417  449947 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:35:58.086767  449947 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:35:58.086801  449947 node_conditions.go:123] node cpu capacity is 2
	I1016 19:35:58.086815  449947 node_conditions.go:105] duration metric: took 3.392217ms to run NodePressure ...
	I1016 19:35:58.086828  449947 start.go:241] waiting for startup goroutines ...
	I1016 19:35:58.086836  449947 start.go:246] waiting for cluster config update ...
	I1016 19:35:58.086844  449947 start.go:255] writing updated cluster config ...
	I1016 19:35:58.087204  449947 ssh_runner.go:195] Run: rm -f paused
	I1016 19:35:58.091592  449947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:35:58.092360  449947 kapi.go:59] client config for pause-870778: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 19:35:58.096346  449947 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j2chq" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:36:00.169876  449947 pod_ready.go:104] pod "coredns-66bc5c9577-j2chq" is not "Ready", error: <nil>
	I1016 19:36:01.914869  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:01.915319  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:01.915376  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:01.915433  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:01.942668  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:01.942731  432201 cri.go:89] found id: ""
	I1016 19:36:01.942765  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:01.942834  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:01.946609  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:01.946708  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:01.974062  432201 cri.go:89] found id: ""
	I1016 19:36:01.974083  432201 logs.go:282] 0 containers: []
	W1016 19:36:01.974092  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:01.974099  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:01.974182  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:02.000901  432201 cri.go:89] found id: ""
	I1016 19:36:02.000924  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.000933  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:02.000939  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:02.001042  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:02.032454  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:02.032479  432201 cri.go:89] found id: ""
	I1016 19:36:02.032488  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:02.032581  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:02.037108  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:02.037235  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:02.069355  432201 cri.go:89] found id: ""
	I1016 19:36:02.069376  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.069385  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:02.069422  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:02.069510  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:02.106166  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:02.106188  432201 cri.go:89] found id: ""
	I1016 19:36:02.106197  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:02.106285  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:02.110591  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:02.110699  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:02.144585  432201 cri.go:89] found id: ""
	I1016 19:36:02.144611  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.144619  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:02.144626  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:02.144713  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:02.172778  432201 cri.go:89] found id: ""
	I1016 19:36:02.172810  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.172818  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:02.172828  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:02.172871  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:02.206091  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:02.206128  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:02.270642  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:02.270679  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:02.297468  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:02.297496  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:02.361841  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:02.361878  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:02.407367  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:02.407393  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:02.536221  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:02.536311  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:36:02.554810  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:02.554839  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:02.633122  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:05.133343  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:05.133830  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:05.133904  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:05.133982  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:05.161489  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:05.161521  432201 cri.go:89] found id: ""
	I1016 19:36:05.161530  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:05.161598  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:05.165838  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:05.165955  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:05.192500  432201 cri.go:89] found id: ""
	I1016 19:36:05.192525  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.192534  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:05.192541  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:05.192612  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:05.221023  432201 cri.go:89] found id: ""
	I1016 19:36:05.221051  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.221060  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:05.221067  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:05.221124  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:05.256554  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:05.256578  432201 cri.go:89] found id: ""
	I1016 19:36:05.256587  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:05.256653  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:05.260604  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:05.260701  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1016 19:36:02.602907  449947 pod_ready.go:104] pod "coredns-66bc5c9577-j2chq" is not "Ready", error: <nil>
	I1016 19:36:04.601652  449947 pod_ready.go:94] pod "coredns-66bc5c9577-j2chq" is "Ready"
	I1016 19:36:04.601681  449947 pod_ready.go:86] duration metric: took 6.505303257s for pod "coredns-66bc5c9577-j2chq" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:04.601691  449947 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vhkhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:04.606659  449947 pod_ready.go:94] pod "coredns-66bc5c9577-vhkhz" is "Ready"
	I1016 19:36:04.606688  449947 pod_ready.go:86] duration metric: took 4.990287ms for pod "coredns-66bc5c9577-vhkhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:04.609655  449947 pod_ready.go:83] waiting for pod "etcd-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:36:06.616311  449947 pod_ready.go:104] pod "etcd-pause-870778" is not "Ready", error: <nil>
	I1016 19:36:05.289453  432201 cri.go:89] found id: ""
	I1016 19:36:05.289478  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.289487  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:05.289493  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:05.289597  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:05.322412  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:05.322437  432201 cri.go:89] found id: ""
	I1016 19:36:05.322446  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:05.322517  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:05.326576  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:05.326658  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:05.361976  432201 cri.go:89] found id: ""
	I1016 19:36:05.362002  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.362019  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:05.362026  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:05.362085  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:05.390534  432201 cri.go:89] found id: ""
	I1016 19:36:05.390565  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.390574  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:05.390586  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:05.390597  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:05.510838  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:05.510878  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:36:05.531695  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:05.531731  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:05.620173  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:05.620196  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:05.620209  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:05.654535  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:05.654569  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:05.721323  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:05.721357  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:05.750367  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:05.750396  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:05.812431  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:05.812470  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:08.346593  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:08.346945  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:08.346983  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:08.347034  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:08.374990  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:08.375024  432201 cri.go:89] found id: ""
	I1016 19:36:08.375033  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:08.375101  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:08.379048  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:08.379119  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:08.407979  432201 cri.go:89] found id: ""
	I1016 19:36:08.408001  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.408010  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:08.408016  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:08.408075  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:08.439179  432201 cri.go:89] found id: ""
	I1016 19:36:08.439203  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.439211  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:08.439218  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:08.439284  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:08.467360  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:08.467383  432201 cri.go:89] found id: ""
	I1016 19:36:08.467392  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:08.467450  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:08.471427  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:08.471504  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:08.498956  432201 cri.go:89] found id: ""
	I1016 19:36:08.498978  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.498986  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:08.498992  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:08.499055  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:08.534062  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:08.534086  432201 cri.go:89] found id: ""
	I1016 19:36:08.534094  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:08.534151  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:08.538437  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:08.538532  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:08.564760  432201 cri.go:89] found id: ""
	I1016 19:36:08.564798  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.564823  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:08.564832  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:08.564909  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:08.592599  432201 cri.go:89] found id: ""
	I1016 19:36:08.592626  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.592635  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:08.592644  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:08.592656  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:08.670096  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:08.670118  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:08.670131  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:08.702830  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:08.702864  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:08.770726  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:08.770766  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:08.797493  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:08.797521  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:08.859192  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:08.859228  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:08.890327  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:08.890353  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:09.014778  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:09.014815  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1016 19:36:09.115798  449947 pod_ready.go:104] pod "etcd-pause-870778" is not "Ready", error: <nil>
	I1016 19:36:11.616476  449947 pod_ready.go:94] pod "etcd-pause-870778" is "Ready"
	I1016 19:36:11.616507  449947 pod_ready.go:86] duration metric: took 7.00682367s for pod "etcd-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:11.619637  449947 pod_ready.go:83] waiting for pod "kube-apiserver-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.160905  449947 pod_ready.go:94] pod "kube-apiserver-pause-870778" is "Ready"
	I1016 19:36:12.160930  449947 pod_ready.go:86] duration metric: took 541.259614ms for pod "kube-apiserver-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.172262  449947 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.185402  449947 pod_ready.go:94] pod "kube-controller-manager-pause-870778" is "Ready"
	I1016 19:36:12.185427  449947 pod_ready.go:86] duration metric: took 13.141572ms for pod "kube-controller-manager-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.198318  449947 pod_ready.go:83] waiting for pod "kube-proxy-x4dmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.220625  449947 pod_ready.go:94] pod "kube-proxy-x4dmw" is "Ready"
	I1016 19:36:12.220648  449947 pod_ready.go:86] duration metric: took 22.306407ms for pod "kube-proxy-x4dmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.413797  449947 pod_ready.go:83] waiting for pod "kube-scheduler-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.813022  449947 pod_ready.go:94] pod "kube-scheduler-pause-870778" is "Ready"
	I1016 19:36:12.813052  449947 pod_ready.go:86] duration metric: took 399.228205ms for pod "kube-scheduler-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.813064  449947 pod_ready.go:40] duration metric: took 14.721436649s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:36:12.869476  449947 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:36:12.872671  449947 out.go:179] * Done! kubectl is now configured to use "pause-870778" cluster and "default" namespace by default
	I1016 19:36:11.532133  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:11.532609  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:11.532652  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:11.532724  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:11.581116  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:11.581162  432201 cri.go:89] found id: ""
	I1016 19:36:11.581172  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:11.581229  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:11.586933  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:11.587058  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:11.623670  432201 cri.go:89] found id: ""
	I1016 19:36:11.623743  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.623765  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:11.623788  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:11.623911  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:11.656560  432201 cri.go:89] found id: ""
	I1016 19:36:11.656644  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.656668  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:11.656691  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:11.656822  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:11.685874  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:11.685896  432201 cri.go:89] found id: ""
	I1016 19:36:11.685906  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:11.685984  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:11.689659  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:11.689759  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:11.714889  432201 cri.go:89] found id: ""
	I1016 19:36:11.714915  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.714923  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:11.714930  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:11.715050  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:11.742632  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:11.742654  432201 cri.go:89] found id: ""
	I1016 19:36:11.742663  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:11.742761  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:11.746451  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:11.746552  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:11.775062  432201 cri.go:89] found id: ""
	I1016 19:36:11.775145  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.775169  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:11.775192  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:11.775257  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:11.809728  432201 cri.go:89] found id: ""
	I1016 19:36:11.809813  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.809837  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:11.809880  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:11.809911  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:11.889311  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:11.889350  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:11.917496  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:11.917524  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:11.979808  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:11.979847  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:12.026240  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:12.026273  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:12.167563  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:12.167683  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:36:12.191842  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:12.191878  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:12.275232  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:12.275255  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:12.275268  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:14.809387  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:14.809793  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:14.809841  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:14.809903  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:14.841599  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:14.841621  432201 cri.go:89] found id: ""
	I1016 19:36:14.841630  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:14.841686  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:14.845445  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:14.845529  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:14.872194  432201 cri.go:89] found id: ""
	I1016 19:36:14.872221  432201 logs.go:282] 0 containers: []
	W1016 19:36:14.872229  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:14.872236  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:14.872297  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:14.898257  432201 cri.go:89] found id: ""
	I1016 19:36:14.898283  432201 logs.go:282] 0 containers: []
	W1016 19:36:14.898291  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:14.898298  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:14.898360  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:14.925331  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:14.925353  432201 cri.go:89] found id: ""
	I1016 19:36:14.925361  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:14.925419  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:14.929314  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:14.929388  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:14.956388  432201 cri.go:89] found id: ""
	I1016 19:36:14.956412  432201 logs.go:282] 0 containers: []
	W1016 19:36:14.956420  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:14.956426  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:14.956487  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:14.983506  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:14.983527  432201 cri.go:89] found id: ""
	I1016 19:36:14.983537  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:14.983619  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:14.988126  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:14.988196  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:15.033311  432201 cri.go:89] found id: ""
	I1016 19:36:15.033347  432201 logs.go:282] 0 containers: []
	W1016 19:36:15.033358  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:15.033365  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:15.033439  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:15.083841  432201 cri.go:89] found id: ""
	I1016 19:36:15.083869  432201 logs.go:282] 0 containers: []
	W1016 19:36:15.083878  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:15.083887  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:15.083900  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:15.164897  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:15.164937  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:15.207989  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:15.208019  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.680151397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.681440362Z" level=info msg="Started container" PID=2359 containerID=9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41 description=kube-system/kindnet-tljwg/kindnet-cni id=393b7077-e668-441f-beef-49405fac3759 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24c574eb8d980f94b71dce0f1fa0488aff165b4b550683df9ef78edda497b152
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.677528463Z" level=info msg="Starting container: cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f" id=82b3203c-e4b5-480b-970f-d32840cb40f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.692862086Z" level=info msg="Started container" PID=2362 containerID=cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f description=kube-system/coredns-66bc5c9577-j2chq/coredns id=82b3203c-e4b5-480b-970f-d32840cb40f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6392c90f6af90784c71f61e7a4bf28c524dad3a05b4502db375c805e9a1f1753
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.717523732Z" level=info msg="Created container 05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e: kube-system/etcd-pause-870778/etcd" id=4d966c5f-69dc-4394-bffe-e5df0408c568 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.720208557Z" level=info msg="Starting container: 05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e" id=3e90ea73-e9bc-4007-904f-efeb542de07d name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.723550878Z" level=info msg="Started container" PID=2372 containerID=05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e description=kube-system/etcd-pause-870778/etcd id=3e90ea73-e9bc-4007-904f-efeb542de07d name=/runtime.v1.RuntimeService/StartContainer sandboxID=50ed380698d1c6e0c0e39c6d998982a7501c4f8557377a0add9c807c595831e8
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.736938123Z" level=info msg="Created container f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a: kube-system/kube-controller-manager-pause-870778/kube-controller-manager" id=4ee29a19-56b5-459b-add4-d7e1a2fe187a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.738840981Z" level=info msg="Starting container: f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a" id=849fc6e5-2d2b-4ef1-9229-6df0648d7d57 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.748857047Z" level=info msg="Started container" PID=2396 containerID=f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a description=kube-system/kube-controller-manager-pause-870778/kube-controller-manager id=849fc6e5-2d2b-4ef1-9229-6df0648d7d57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=962e5937049b3b13276548c5064f08d6543bcad7c21333f1a908b74e76bcdea2
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.862343943Z" level=info msg="Created container 4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0: kube-system/kube-proxy-x4dmw/kube-proxy" id=c5499f56-777d-4d97-8286-414b1aa15dc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.86298877Z" level=info msg="Starting container: 4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0" id=cbbe9152-40a9-4689-a26d-7c2e3d3f0afa name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.866590029Z" level=info msg="Started container" PID=2378 containerID=4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0 description=kube-system/kube-proxy-x4dmw/kube-proxy id=cbbe9152-40a9-4689-a26d-7c2e3d3f0afa name=/runtime.v1.RuntimeService/StartContainer sandboxID=07f9c4fb2e4beedb8929c542cc23716f3f467460d36baf07a8297bda674a9762
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.007671112Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.012882331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.012926729Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.012953264Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.017028364Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.017076668Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.017102875Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.021115329Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.021192154Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.021218402Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.024893041Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.024991503Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f1db270c13de7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   962e5937049b3       kube-controller-manager-pause-870778   kube-system
	4059e38191b26       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   07f9c4fb2e4be       kube-proxy-x4dmw                       kube-system
	cb7ea64c57a6b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   6392c90f6af90       coredns-66bc5c9577-j2chq               kube-system
	9a54382d4bc86       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   24c574eb8d980       kindnet-tljwg                          kube-system
	ae50feda76840       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   d031a84b3b633       coredns-66bc5c9577-vhkhz               kube-system
	05bbf102a2113       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      1                   50ed380698d1c       etcd-pause-870778                      kube-system
	7abef40542740       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   3ef2cdbf8aead       kube-apiserver-pause-870778            kube-system
	453a3e3ee78d5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   95f499bdf08b8       kube-scheduler-pause-870778            kube-system
	36bd434b7df4f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   6392c90f6af90       coredns-66bc5c9577-j2chq               kube-system
	998613c05e7f1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   d031a84b3b633       coredns-66bc5c9577-vhkhz               kube-system
	3b392ff5a2e8e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   24c574eb8d980       kindnet-tljwg                          kube-system
	1fa43c29e5044       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   07f9c4fb2e4be       kube-proxy-x4dmw                       kube-system
	78a959960479c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   50ed380698d1c       etcd-pause-870778                      kube-system
	976c969aa054f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   3ef2cdbf8aead       kube-apiserver-pause-870778            kube-system
	7832a0d4d815d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   962e5937049b3       kube-controller-manager-pause-870778   kube-system
	6a93a6454e89d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   95f499bdf08b8       kube-scheduler-pause-870778            kube-system
	
	
	==> coredns [36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42325 - 33273 "HINFO IN 289888803237167472.8511805586887677807. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008637089s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38578 - 49796 "HINFO IN 1497488450740844195.3611410341252859870. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014022943s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38811 - 61888 "HINFO IN 8722966920593532517.6734627329511225537. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024005162s
	
	
	==> coredns [cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36045 - 22332 "HINFO IN 761004753842047284.7834603073744702931. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00768399s
	
	
	==> describe nodes <==
	Name:               pause-870778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-870778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=pause-870778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_34_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:34:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-870778
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:36:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:34:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:34:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:34:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-870778
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                37abcd1e-2ee0-4c68-904f-ac1f5cf6438e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-j2chq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 coredns-66bc5c9577-vhkhz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-870778                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kindnet-tljwg                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-870778             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-870778    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-x4dmw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-870778             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 19s                kube-proxy       
	  Normal   NodeHasSufficientPID     93s (x8 over 93s)  kubelet          Node pause-870778 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 93s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  93s (x8 over 93s)  kubelet          Node pause-870778 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    93s (x8 over 93s)  kubelet          Node pause-870778 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 93s                kubelet          Starting kubelet.
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-870778 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-870778 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-870778 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-870778 event: Registered Node pause-870778 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-870778 status is now: NodeReady
	  Normal   RegisteredNode           16s                node-controller  Node pause-870778 event: Registered Node pause-870778 in Controller
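	(The node summary above is standard kubectl describe output. One way to regenerate it for this profile, assuming the cluster is still up, is: out/minikube-linux-arm64 -p pause-870778 kubectl -- describe node pause-870778)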
	
	
	==> dmesg <==
	[Oct16 18:59] overlayfs: idmapped layers are currently not supported
	[ +38.025144] overlayfs: idmapped layers are currently not supported
	[Oct16 19:08] overlayfs: idmapped layers are currently not supported
	[  +3.621058] overlayfs: idmapped layers are currently not supported
	[ +41.218849] overlayfs: idmapped layers are currently not supported
	[Oct16 19:09] overlayfs: idmapped layers are currently not supported
	[Oct16 19:11] overlayfs: idmapped layers are currently not supported
	[Oct16 19:16] overlayfs: idmapped layers are currently not supported
	[ +33.922450] overlayfs: idmapped layers are currently not supported
	[Oct16 19:18] overlayfs: idmapped layers are currently not supported
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e] <==
	{"level":"warn","ts":"2025-10-16T19:35:54.998291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.054901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.097292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.108836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.166761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.214322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.251211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.274347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.316938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.350083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.389595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.418079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.441699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.471422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.509854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.542656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.570358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.590772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.619506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.657090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.694141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.734344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.805372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.835418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.949454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	
	
	==> etcd [78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18] <==
	{"level":"warn","ts":"2025-10-16T19:34:47.389504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.398217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.421835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.456219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.474102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.506674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.590614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34616","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T19:35:44.122613Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-16T19:35:44.122655Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-870778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-16T19:35:44.122729Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T19:35:44.277921Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T19:35:44.279403Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T19:35:44.279458Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-16T19:35:44.279536Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-16T19:35:44.279554Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279539Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279627Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-16T19:35:44.279660Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279730Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279748Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-16T19:35:44.279756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T19:35:44.282831Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-16T19:35:44.283414Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T19:35:44.283499Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:35:44.283520Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-870778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 19:36:16 up  2:18,  0 user,  load average: 2.60, 2.64, 2.34
	Linux pause-870778 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0] <==
	I1016 19:34:57.608017       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:34:57.609038       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:34:57.609257       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:34:57.609307       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:34:57.609325       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:34:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:34:57.809070       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:34:57.809095       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:34:57.809105       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:34:57.809857       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:35:27.809706       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:35:27.809900       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 19:35:27.809999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:35:27.810074       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1016 19:35:29.009689       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:35:29.009724       1 metrics.go:72] Registering metrics
	I1016 19:35:29.009816       1 controller.go:711] "Syncing nftables rules"
	I1016 19:35:37.813204       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:35:37.813254       1 main.go:301] handling current node
	
	
	==> kindnet [9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41] <==
	I1016 19:35:52.833429       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:35:52.834287       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:35:52.834493       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:35:52.866295       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:35:52.866838       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:35:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:35:53.007693       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:35:53.007773       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:35:53.007809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:35:53.008301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 19:35:57.209027       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:35:57.209085       1 metrics.go:72] Registering metrics
	I1016 19:35:57.209185       1 controller.go:711] "Syncing nftables rules"
	I1016 19:36:03.007191       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:36:03.007284       1 main.go:301] handling current node
	I1016 19:36:13.007991       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:36:13.008035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695] <==
	I1016 19:35:57.029813       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 19:35:57.030076       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:35:57.030154       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:35:57.037384       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 19:35:57.045427       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:35:57.046679       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 19:35:57.046737       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:35:57.066808       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:35:57.067167       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:35:57.100058       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:35:57.100544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:35:57.101393       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1016 19:35:57.101514       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:35:57.102044       1 aggregator.go:171] initial CRD sync complete...
	I1016 19:35:57.102106       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 19:35:57.102136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:35:57.102192       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:35:57.103070       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1016 19:35:57.140203       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:35:57.725542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:35:59.060261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:36:00.442653       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:36:00.636995       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:36:00.686859       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:36:00.738780       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670] <==
	W1016 19:35:44.136138       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.136191       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.136263       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.136328       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.137706       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.137889       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138003       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138115       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138213       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138296       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138391       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138485       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138604       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138688       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138804       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138938       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139028       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139133       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139237       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139345       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139448       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.140461       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.140605       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.140672       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.142324       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e] <==
	I1016 19:34:56.551994       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 19:34:56.552070       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 19:34:56.552214       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 19:34:56.552320       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 19:34:56.552464       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:34:56.552522       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 19:34:56.553993       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 19:34:56.554500       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 19:34:56.554856       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 19:34:56.555458       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:34:56.555712       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:34:56.555814       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-870778"
	I1016 19:34:56.557342       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:34:56.557415       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 19:34:56.560978       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:34:56.561249       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:34:56.561317       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:34:56.561348       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:34:56.561376       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:34:56.572586       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-870778" podCIDRs=["10.244.0.0/24"]
	I1016 19:34:56.574199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:34:56.574317       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:34:56.574359       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:34:56.574293       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:35:41.566630       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a] <==
	I1016 19:36:00.427361       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:36:00.429349       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:36:00.429926       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 19:36:00.430593       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:36:00.430893       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 19:36:00.430688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 19:36:00.434680       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 19:36:00.436764       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 19:36:00.436974       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:36:00.437390       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 19:36:00.439859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 19:36:00.440039       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:36:00.439870       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:36:00.440231       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:36:00.440261       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:36:00.440275       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:36:00.440282       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:36:00.442115       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 19:36:00.453494       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 19:36:00.453660       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:36:00.453670       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:36:00.453677       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:36:00.454988       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:36:00.460880       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:36:00.460999       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16] <==
	I1016 19:34:57.497438       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:34:57.656236       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:34:57.757057       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:34:57.757405       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:34:57.757484       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:34:57.779532       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:34:57.779597       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:34:57.784147       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:34:57.784501       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:34:57.784640       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:34:57.786552       1 config.go:200] "Starting service config controller"
	I1016 19:34:57.786634       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:34:57.786677       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:34:57.786714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:34:57.786749       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:34:57.786777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:34:57.787584       1 config.go:309] "Starting node config controller"
	I1016 19:34:57.787652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:34:57.787680       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:34:57.887775       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:34:57.887785       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:34:57.887803       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0] <==
	I1016 19:35:53.858632       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:35:54.908981       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:35:57.143982       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:35:57.144015       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:35:57.144149       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:35:57.251266       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:35:57.251330       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:35:57.265702       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:35:57.266062       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:35:57.266311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:35:57.278564       1 config.go:200] "Starting service config controller"
	I1016 19:35:57.278675       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:35:57.278756       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:35:57.278806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:35:57.278844       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:35:57.278849       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:35:57.279571       1 config.go:309] "Starting node config controller"
	I1016 19:35:57.279580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:35:57.279586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:35:57.381234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:35:57.381287       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:35:57.381330       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5] <==
	I1016 19:35:55.882345       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:35:57.173083       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:35:57.173121       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:35:57.185464       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:35:57.185651       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:35:57.185709       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:35:57.185808       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:35:57.189728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:57.189759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:57.189779       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:57.189786       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:57.286411       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 19:35:57.291109       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:57.291273       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117] <==
	I1016 19:34:48.581847       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:34:50.860257       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:34:50.860355       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:34:50.866300       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:34:50.866425       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:34:50.866508       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:34:50.866542       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:34:50.866597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:34:50.866629       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:34:50.866760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:34:50.866839       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:34:50.967331       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:34:50.967448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 19:34:50.967511       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:44.116825       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1016 19:35:44.116851       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1016 19:35:44.116870       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1016 19:35:44.116894       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:44.116911       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1016 19:35:44.116929       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:44.117439       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1016 19:35:44.117463       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 16 19:35:52 pause-870778 kubelet[1322]: I1016 19:35:52.439346    1322 scope.go:117] "RemoveContainer" containerID="998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.440404    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="992c1732a06ea273ce94eac8d202f813" pod="kube-system/kube-apiserver-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.440827    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cc63652dc3f9bd697e0145e37fc17f48" pod="kube-system/kube-scheduler-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.441287    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x4dmw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="2ee80808-9395-44ba-aeee-51c69c0b1f69" pod="kube-system/kube-proxy-x4dmw"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.441611    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tljwg\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f3c571f-2279-4d82-af72-febc2dd3f054" pod="kube-system/kindnet-tljwg"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.441924    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-vhkhz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0654543-a145-4d72-961a-72e07066dcf9" pod="kube-system/coredns-66bc5c9577-vhkhz"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.442233    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="35a8880361f76a34123415ec35118bfd" pod="kube-system/etcd-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.442898    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-j2chq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5654ae7b-c8b7-43ca-a406-a2b469ab6a89" pod="kube-system/coredns-66bc5c9577-j2chq"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: I1016 19:35:52.463982    1322 scope.go:117] "RemoveContainer" containerID="7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.464734    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="992c1732a06ea273ce94eac8d202f813" pod="kube-system/kube-apiserver-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465035    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cc63652dc3f9bd697e0145e37fc17f48" pod="kube-system/kube-scheduler-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465276    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x4dmw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="2ee80808-9395-44ba-aeee-51c69c0b1f69" pod="kube-system/kube-proxy-x4dmw"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465511    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tljwg\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f3c571f-2279-4d82-af72-febc2dd3f054" pod="kube-system/kindnet-tljwg"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465724    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-vhkhz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0654543-a145-4d72-961a-72e07066dcf9" pod="kube-system/coredns-66bc5c9577-vhkhz"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465955    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="35a8880361f76a34123415ec35118bfd" pod="kube-system/etcd-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.466174    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f1bf67ee649a97f1300a4eda63a3b1cc" pod="kube-system/kube-controller-manager-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.466378    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-j2chq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5654ae7b-c8b7-43ca-a406-a2b469ab6a89" pod="kube-system/coredns-66bc5c9577-j2chq"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.868652    1322 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-870778\" is forbidden: User \"system:node:pause-870778\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" podUID="35a8880361f76a34123415ec35118bfd" pod="kube-system/etcd-pause-870778"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.870390    1322 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-870778\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.873113    1322 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-870778\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.873252    1322 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-870778\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.921474    1322 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-870778\" is forbidden: User \"system:node:pause-870778\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" podUID="f1bf67ee649a97f1300a4eda63a3b1cc" pod="kube-system/kube-controller-manager-pause-870778"
	Oct 16 19:36:13 pause-870778 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:36:13 pause-870778 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:36:13 pause-870778 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-870778 -n pause-870778
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-870778 -n pause-870778: exit status 2 (369.318569ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-870778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-870778
helpers_test.go:243: (dbg) docker inspect pause-870778:

-- stdout --
	[
	    {
	        "Id": "37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d",
	        "Created": "2025-10-16T19:34:22.706072665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446072,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:34:22.771344908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/hosts",
	        "LogPath": "/var/lib/docker/containers/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d/37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d-json.log",
	        "Name": "/pause-870778",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-870778:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-870778",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37c92bc7f6ae118aaf3fc148d7153ec9e03d6b90e4d3b23269f1a399bcf88b8d",
	                "LowerDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b235b7e3599d0f4598d94c98606271186ece95a3a8dc18fc845bcbaf34b7162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-870778",
	                "Source": "/var/lib/docker/volumes/pause-870778/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-870778",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-870778",
	                "name.minikube.sigs.k8s.io": "pause-870778",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7a63c89c3a13182aef061ac751de4e017fc71f001e1cee9b705ed66aa923669",
	            "SandboxKey": "/var/run/docker/netns/b7a63c89c3a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-870778": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2d:27:9f:bc:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4da9b81e2b1a27078af3212586cff121719d63118f6ac0b3eb53ba67d200358c",
	                    "EndpointID": "f8902a2124156a79d6637080bb6327a024d8311c9d01e7703080500bd7e201f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-870778",
	                        "37c92bc7f6ae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-870778 -n pause-870778
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-870778 -n pause-870778: exit status 2 (354.962506ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-870778 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-870778 logs -n 25: (1.805040837s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-204009 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:29 UTC │ 16 Oct 25 19:30 UTC │
	│ start   │ -p missing-upgrade-153120 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-153120    │ jenkins │ v1.32.0 │ 16 Oct 25 19:29 UTC │ 16 Oct 25 19:30 UTC │
	│ start   │ -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:30 UTC │ 16 Oct 25 19:31 UTC │
	│ start   │ -p missing-upgrade-153120 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-153120    │ jenkins │ v1.37.0 │ 16 Oct 25 19:30 UTC │ 16 Oct 25 19:31 UTC │
	│ delete  │ -p missing-upgrade-153120                                                                                                                │ missing-upgrade-153120    │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:31 UTC │
	│ start   │ -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-627378 │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:31 UTC │
	│ delete  │ -p NoKubernetes-204009                                                                                                                   │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:31 UTC │
	│ start   │ -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:32 UTC │
	│ stop    │ -p kubernetes-upgrade-627378                                                                                                             │ kubernetes-upgrade-627378 │ jenkins │ v1.37.0 │ 16 Oct 25 19:31 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-627378 │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │                     │
	│ ssh     │ -p NoKubernetes-204009 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │                     │
	│ stop    │ -p NoKubernetes-204009                                                                                                                   │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p NoKubernetes-204009 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ ssh     │ -p NoKubernetes-204009 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │                     │
	│ delete  │ -p NoKubernetes-204009                                                                                                                   │ NoKubernetes-204009       │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p stopped-upgrade-284470 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-284470    │ jenkins │ v1.32.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ stop    │ stopped-upgrade-284470 stop                                                                                                              │ stopped-upgrade-284470    │ jenkins │ v1.32.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:32 UTC │
	│ start   │ -p stopped-upgrade-284470 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-284470    │ jenkins │ v1.37.0 │ 16 Oct 25 19:32 UTC │ 16 Oct 25 19:33 UTC │
	│ delete  │ -p stopped-upgrade-284470                                                                                                                │ stopped-upgrade-284470    │ jenkins │ v1.37.0 │ 16 Oct 25 19:33 UTC │ 16 Oct 25 19:33 UTC │
	│ start   │ -p running-upgrade-779500 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-779500    │ jenkins │ v1.32.0 │ 16 Oct 25 19:33 UTC │ 16 Oct 25 19:33 UTC │
	│ start   │ -p running-upgrade-779500 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-779500    │ jenkins │ v1.37.0 │ 16 Oct 25 19:33 UTC │ 16 Oct 25 19:34 UTC │
	│ delete  │ -p running-upgrade-779500                                                                                                                │ running-upgrade-779500    │ jenkins │ v1.37.0 │ 16 Oct 25 19:34 UTC │ 16 Oct 25 19:34 UTC │
	│ start   │ -p pause-870778 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-870778              │ jenkins │ v1.37.0 │ 16 Oct 25 19:34 UTC │ 16 Oct 25 19:35 UTC │
	│ start   │ -p pause-870778 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-870778              │ jenkins │ v1.37.0 │ 16 Oct 25 19:35 UTC │ 16 Oct 25 19:36 UTC │
	│ pause   │ -p pause-870778 --alsologtostderr -v=5                                                                                                   │ pause-870778              │ jenkins │ v1.37.0 │ 16 Oct 25 19:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:35:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:35:41.622118  449947 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:35:41.622259  449947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:35:41.622269  449947 out.go:374] Setting ErrFile to fd 2...
	I1016 19:35:41.622274  449947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:35:41.622541  449947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:35:41.622906  449947 out.go:368] Setting JSON to false
	I1016 19:35:41.623864  449947 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8271,"bootTime":1760635071,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:35:41.623933  449947 start.go:141] virtualization:  
	I1016 19:35:41.627161  449947 out.go:179] * [pause-870778] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:35:41.631008  449947 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:35:41.631235  449947 notify.go:220] Checking for updates...
	I1016 19:35:41.636655  449947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:35:41.639395  449947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:35:41.642316  449947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:35:41.645201  449947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:35:41.648258  449947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:35:41.651752  449947 config.go:182] Loaded profile config "pause-870778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:35:41.652366  449947 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:35:41.685309  449947 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:35:41.685424  449947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:35:41.749942  449947 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:35:41.740158242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:35:41.750059  449947 docker.go:318] overlay module found
	I1016 19:35:41.753254  449947 out.go:179] * Using the docker driver based on existing profile
	I1016 19:35:41.756074  449947 start.go:305] selected driver: docker
	I1016 19:35:41.756098  449947 start.go:925] validating driver "docker" against &{Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:35:41.756223  449947 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:35:41.756344  449947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:35:41.881241  449947 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:35:41.865298543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:35:41.881638  449947 cni.go:84] Creating CNI manager for ""
	I1016 19:35:41.881698  449947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:35:41.881748  449947 start.go:349] cluster config:
	{Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:35:41.886831  449947 out.go:179] * Starting "pause-870778" primary control-plane node in "pause-870778" cluster
	I1016 19:35:41.889453  449947 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:35:41.892220  449947 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:35:41.895144  449947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:35:41.895195  449947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:35:41.895206  449947 cache.go:58] Caching tarball of preloaded images
	I1016 19:35:41.895303  449947 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:35:41.895312  449947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:35:41.895460  449947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/config.json ...
	I1016 19:35:41.895685  449947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:35:41.923094  449947 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:35:41.923145  449947 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:35:41.923180  449947 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:35:41.923269  449947 start.go:360] acquireMachinesLock for pause-870778: {Name:mk8801ea66fe5ad45547bf1c2262db986babd029 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:35:41.923434  449947 start.go:364] duration metric: took 119.984µs to acquireMachinesLock for "pause-870778"
	I1016 19:35:41.923462  449947 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:35:41.923473  449947 fix.go:54] fixHost starting: 
	I1016 19:35:41.923844  449947 cli_runner.go:164] Run: docker container inspect pause-870778 --format={{.State.Status}}
	I1016 19:35:41.946179  449947 fix.go:112] recreateIfNeeded on pause-870778: state=Running err=<nil>
	W1016 19:35:41.946218  449947 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 19:35:41.777314  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:41.777710  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:41.777749  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:41.777803  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:41.807264  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:41.807284  432201 cri.go:89] found id: ""
	I1016 19:35:41.807293  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:41.807376  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:41.817931  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:41.818031  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:41.859839  432201 cri.go:89] found id: ""
	I1016 19:35:41.859869  432201 logs.go:282] 0 containers: []
	W1016 19:35:41.859878  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:41.859884  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:41.859942  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:41.906359  432201 cri.go:89] found id: ""
	I1016 19:35:41.906379  432201 logs.go:282] 0 containers: []
	W1016 19:35:41.906388  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:41.906395  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:41.906453  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:41.945382  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:41.945401  432201 cri.go:89] found id: ""
	I1016 19:35:41.945410  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:41.945465  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:41.953660  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:41.953731  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:41.998843  432201 cri.go:89] found id: ""
	I1016 19:35:41.998869  432201 logs.go:282] 0 containers: []
	W1016 19:35:41.998878  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:41.998884  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:41.998948  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:42.047594  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:42.047620  432201 cri.go:89] found id: "cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375"
	I1016 19:35:42.047626  432201 cri.go:89] found id: ""
	I1016 19:35:42.047638  432201 logs.go:282] 2 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88 cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375]
	I1016 19:35:42.047703  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:42.052591  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:42.057532  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:42.057605  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:42.103677  432201 cri.go:89] found id: ""
	I1016 19:35:42.103702  432201 logs.go:282] 0 containers: []
	W1016 19:35:42.103711  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:42.103718  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:42.103793  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:42.142321  432201 cri.go:89] found id: ""
	I1016 19:35:42.142350  432201 logs.go:282] 0 containers: []
	W1016 19:35:42.142361  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:42.142380  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:42.142394  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:42.305310  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:42.305349  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:42.324886  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:42.324935  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:42.403535  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:42.403573  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:42.439583  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:42.439611  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:42.508812  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:42.508854  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:42.550392  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:42.550427  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:42.652954  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:42.652977  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:42.652990  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:42.694139  432201 logs.go:123] Gathering logs for kube-controller-manager [cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375] ...
	I1016 19:35:42.694224  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf52722920cef7f69cfaf4c84f3e09114fc0e90b212c53311a54627e756ba375"
	I1016 19:35:45.224126  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:45.224607  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:45.224655  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:45.224724  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:45.261763  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:45.261788  432201 cri.go:89] found id: ""
	I1016 19:35:45.261797  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:45.261868  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:45.267070  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:45.267158  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:41.949324  449947 out.go:252] * Updating the running docker "pause-870778" container ...
	I1016 19:35:41.949361  449947 machine.go:93] provisionDockerMachine start ...
	I1016 19:35:41.949452  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:41.974414  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:41.974744  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:41.974760  449947 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:35:42.154552  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-870778
	
	I1016 19:35:42.154585  449947 ubuntu.go:182] provisioning hostname "pause-870778"
	I1016 19:35:42.154665  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:42.187360  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:42.187676  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:42.187689  449947 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-870778 && echo "pause-870778" | sudo tee /etc/hostname
	I1016 19:35:42.407664  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-870778
	
	I1016 19:35:42.407738  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:42.428714  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:42.429032  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:42.429048  449947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-870778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-870778/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-870778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:35:42.594489  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:35:42.594575  449947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:35:42.594633  449947 ubuntu.go:190] setting up certificates
	I1016 19:35:42.594665  449947 provision.go:84] configureAuth start
	I1016 19:35:42.594760  449947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-870778
	I1016 19:35:42.614974  449947 provision.go:143] copyHostCerts
	I1016 19:35:42.615046  449947 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:35:42.615062  449947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:35:42.615139  449947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:35:42.615244  449947 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:35:42.615249  449947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:35:42.615276  449947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:35:42.615333  449947 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:35:42.615338  449947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:35:42.615360  449947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:35:42.615438  449947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.pause-870778 san=[127.0.0.1 192.168.76.2 localhost minikube pause-870778]
	I1016 19:35:43.749086  449947 provision.go:177] copyRemoteCerts
	I1016 19:35:43.749177  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:35:43.749220  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:43.768085  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:43.873030  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:35:43.893687  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1016 19:35:43.911395  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:35:43.930398  449947 provision.go:87] duration metric: took 1.335694457s to configureAuth
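The configureAuth step above regenerates the machine's server certificate with the SANs listed in the san=[...] field. As an illustrative check only, assuming openssl is available on the host and using the server.pem path from the log, the resulting SANs could be inspected like this:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'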
	I1016 19:35:43.930442  449947 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:35:43.930660  449947 config.go:182] Loaded profile config "pause-870778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:35:43.930768  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:43.948085  449947 main.go:141] libmachine: Using SSH client type: native
	I1016 19:35:43.948401  449947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1016 19:35:43.948423  449947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:35:45.312809  432201 cri.go:89] found id: ""
	I1016 19:35:45.312909  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.312925  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:45.312942  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:45.313270  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:45.379519  432201 cri.go:89] found id: ""
	I1016 19:35:45.379543  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.379552  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:45.379560  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:45.379629  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:45.416624  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:45.416648  432201 cri.go:89] found id: ""
	I1016 19:35:45.416657  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:45.416747  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:45.421036  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:45.421116  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:45.452632  432201 cri.go:89] found id: ""
	I1016 19:35:45.452655  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.452665  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:45.452671  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:45.452729  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:45.482130  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:45.482154  432201 cri.go:89] found id: ""
	I1016 19:35:45.482164  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:45.482224  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:45.486221  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:45.486298  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:45.512496  432201 cri.go:89] found id: ""
	I1016 19:35:45.512561  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.512584  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:45.512607  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:45.512684  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:45.540495  432201 cri.go:89] found id: ""
	I1016 19:35:45.540570  432201 logs.go:282] 0 containers: []
	W1016 19:35:45.540592  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:45.540617  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:45.540643  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:45.615683  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:45.615763  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:45.642368  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:45.642398  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:45.702034  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:45.702070  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:45.733189  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:45.733221  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:45.846088  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:45.846130  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:45.862980  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:45.863011  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:45.938988  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:45.939013  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:45.939026  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:48.474691  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:48.475169  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:48.475234  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:48.475313  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:48.502221  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:48.502252  432201 cri.go:89] found id: ""
	I1016 19:35:48.502262  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:48.502321  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:48.505893  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:48.505965  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:48.531717  432201 cri.go:89] found id: ""
	I1016 19:35:48.531744  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.531753  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:48.531760  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:48.531817  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:48.558828  432201 cri.go:89] found id: ""
	I1016 19:35:48.558853  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.558868  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:48.558875  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:48.558934  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:48.586465  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:48.586489  432201 cri.go:89] found id: ""
	I1016 19:35:48.586498  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:48.586557  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:48.591303  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:48.591450  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:48.618063  432201 cri.go:89] found id: ""
	I1016 19:35:48.618089  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.618098  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:48.618104  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:48.618161  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:48.648087  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:48.648110  432201 cri.go:89] found id: ""
	I1016 19:35:48.648118  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:48.648181  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:48.651773  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:48.651851  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:48.677357  432201 cri.go:89] found id: ""
	I1016 19:35:48.677382  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.677390  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:48.677396  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:48.677454  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:48.703982  432201 cri.go:89] found id: ""
	I1016 19:35:48.704007  432201 logs.go:282] 0 containers: []
	W1016 19:35:48.704015  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:48.704025  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:48.704039  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:48.729155  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:48.729182  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:48.787930  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:48.787968  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:48.822465  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:48.822496  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:48.946307  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:48.946342  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:48.962419  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:48.962447  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:49.030538  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:49.030562  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:49.030576  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:49.063377  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:49.063409  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
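The repeated blocks above show minikube polling the apiserver healthz endpoint and, while it stays unreachable, enumerating CRI containers so it can collect their logs. A rough hand-run equivalent of that diagnostic loop, as a sketch only (the endpoint and container names come from the log; crictl is assumed to be on PATH):

	# Probe the apiserver the same way api_server.go does (TLS verification skipped).
	curl -k --max-time 2 https://192.168.85.2:8443/healthz || echo 'apiserver unreachable'
	# For each expected control-plane component, list containers and dump recent logs.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    sudo crictl logs --tail 400 "$id"
	  done
	done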
	I1016 19:35:49.299882  449947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:35:49.299907  449947 machine.go:96] duration metric: took 7.350537964s to provisionDockerMachine
	I1016 19:35:49.299918  449947 start.go:293] postStartSetup for "pause-870778" (driver="docker")
	I1016 19:35:49.299929  449947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:35:49.299995  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:35:49.300050  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.318048  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.421159  449947 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:35:49.424478  449947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:35:49.424508  449947 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:35:49.424519  449947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:35:49.424576  449947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:35:49.424673  449947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:35:49.424785  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:35:49.432562  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:35:49.451159  449947 start.go:296] duration metric: took 151.224637ms for postStartSetup
	I1016 19:35:49.451236  449947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:35:49.451300  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.467931  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.566683  449947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:35:49.572067  449947 fix.go:56] duration metric: took 7.648586184s for fixHost
	I1016 19:35:49.572095  449947 start.go:83] releasing machines lock for "pause-870778", held for 7.648645188s
	I1016 19:35:49.572181  449947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-870778
	I1016 19:35:49.589305  449947 ssh_runner.go:195] Run: cat /version.json
	I1016 19:35:49.589362  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.589654  449947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:35:49.589725  449947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-870778
	I1016 19:35:49.611923  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.622807  449947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/pause-870778/id_rsa Username:docker}
	I1016 19:35:49.716947  449947 ssh_runner.go:195] Run: systemctl --version
	I1016 19:35:49.807704  449947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:35:49.851727  449947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:35:49.856217  449947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:35:49.856366  449947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:35:49.864764  449947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:35:49.864798  449947 start.go:495] detecting cgroup driver to use...
	I1016 19:35:49.864831  449947 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:35:49.864884  449947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:35:49.880852  449947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:35:49.894240  449947 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:35:49.894396  449947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:35:49.910817  449947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:35:49.924618  449947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:35:50.075508  449947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:35:50.241580  449947 docker.go:234] disabling docker service ...
	I1016 19:35:50.241683  449947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:35:50.257922  449947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:35:50.272532  449947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:35:50.428137  449947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:35:50.590140  449947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:35:50.603243  449947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:35:50.619195  449947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:35:50.619311  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.628109  449947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:35:50.628227  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.640870  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.650081  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.659301  449947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:35:50.667599  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.676818  449947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.685486  449947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:35:50.694628  449947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:35:50.702265  449947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:35:50.709809  449947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:35:50.862186  449947 ssh_runner.go:195] Run: sudo systemctl restart crio
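The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A sketch of the configuration those edits steer toward, written to a hypothetical extra drop-in purely for illustration (the option names come from the log; the TOML section placement is assumed from stock CRI-O defaults):

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/99-illustration.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio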
	I1016 19:35:51.049477  449947 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:35:51.049570  449947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:35:51.053589  449947 start.go:563] Will wait 60s for crictl version
	I1016 19:35:51.053688  449947 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.057170  449947 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:35:51.082811  449947 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:35:51.082915  449947 ssh_runner.go:195] Run: crio --version
	I1016 19:35:51.117084  449947 ssh_runner.go:195] Run: crio --version
	I1016 19:35:51.154762  449947 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:35:51.157815  449947 cli_runner.go:164] Run: docker network inspect pause-870778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:35:51.175734  449947 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 19:35:51.180707  449947 kubeadm.go:883] updating cluster {Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:35:51.180876  449947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:35:51.180939  449947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:35:51.218820  449947 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:35:51.218845  449947 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:35:51.218903  449947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:35:51.248697  449947 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:35:51.248723  449947 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:35:51.248731  449947 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1016 19:35:51.248840  449947 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-870778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:35:51.248920  449947 ssh_runner.go:195] Run: crio config
	I1016 19:35:51.328067  449947 cni.go:84] Creating CNI manager for ""
	I1016 19:35:51.328153  449947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:35:51.328189  449947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:35:51.328240  449947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-870778 NodeName:pause-870778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:35:51.328416  449947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-870778"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:35:51.328533  449947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:35:51.336550  449947 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:35:51.336622  449947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:35:51.344293  449947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1016 19:35:51.360660  449947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:35:51.374247  449947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
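The kubeadm configuration shown above has just been copied to /var/tmp/minikube/kubeadm.yaml.new. As a sketch, such a file can be sanity-checked on the node with the pinned kubeadm binary from the log (the config validate subcommand is assumed to be available in this release):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new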
	I1016 19:35:51.387691  449947 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:35:51.391674  449947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:35:51.533536  449947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:35:51.547239  449947 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778 for IP: 192.168.76.2
	I1016 19:35:51.547264  449947 certs.go:195] generating shared ca certs ...
	I1016 19:35:51.547280  449947 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:35:51.547433  449947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:35:51.547482  449947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:35:51.547497  449947 certs.go:257] generating profile certs ...
	I1016 19:35:51.547612  449947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.key
	I1016 19:35:51.547690  449947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/apiserver.key.3ad9919e
	I1016 19:35:51.547738  449947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/proxy-client.key
	I1016 19:35:51.547852  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:35:51.547884  449947 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:35:51.547897  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:35:51.547926  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:35:51.547958  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:35:51.547984  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:35:51.548027  449947 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:35:51.548712  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:35:51.567701  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:35:51.586325  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:35:51.604435  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:35:51.622964  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 19:35:51.648002  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:35:51.674689  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:35:51.692706  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 19:35:51.721327  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:35:51.740229  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:35:51.761674  449947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:35:51.789808  449947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:35:51.808309  449947 ssh_runner.go:195] Run: openssl version
	I1016 19:35:51.815974  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:35:51.830970  449947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:35:51.838142  449947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:35:51.838263  449947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:35:51.884700  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:35:51.894427  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:35:51.904806  449947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:35:51.909986  449947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:35:51.910118  449947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:35:51.954082  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:35:51.966072  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:35:51.978962  449947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:35:51.983067  449947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:35:51.983138  449947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:35:52.025700  449947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:35:52.034384  449947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:35:52.038932  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:35:52.087478  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:35:52.138215  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:35:52.183882  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:35:52.229427  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:35:52.293253  449947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 19:35:52.459350  449947 kubeadm.go:400] StartCluster: {Name:pause-870778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-870778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:35:52.459474  449947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:35:52.459532  449947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:35:52.702295  449947 cri.go:89] found id: "cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f"
	I1016 19:35:52.702318  449947 cri.go:89] found id: "9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41"
	I1016 19:35:52.702323  449947 cri.go:89] found id: "ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c"
	I1016 19:35:52.702327  449947 cri.go:89] found id: "7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695"
	I1016 19:35:52.702350  449947 cri.go:89] found id: "453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5"
	I1016 19:35:52.702355  449947 cri.go:89] found id: "36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b"
	I1016 19:35:52.702362  449947 cri.go:89] found id: "998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	I1016 19:35:52.702366  449947 cri.go:89] found id: "3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0"
	I1016 19:35:52.702369  449947 cri.go:89] found id: "1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16"
	I1016 19:35:52.702380  449947 cri.go:89] found id: "78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18"
	I1016 19:35:52.702386  449947 cri.go:89] found id: "976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670"
	I1016 19:35:52.702390  449947 cri.go:89] found id: "7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	I1016 19:35:52.702393  449947 cri.go:89] found id: "6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117"
	I1016 19:35:52.702396  449947 cri.go:89] found id: ""
	I1016 19:35:52.702446  449947 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:35:52.738625  449947 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:35:52Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:35:52.738698  449947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:35:52.757284  449947 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:35:52.757305  449947 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:35:52.757353  449947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:35:52.769802  449947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:35:52.775472  449947 kubeconfig.go:125] found "pause-870778" server: "https://192.168.76.2:8443"
	I1016 19:35:52.776564  449947 kapi.go:59] client config for pause-870778: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 19:35:52.777099  449947 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 19:35:52.777120  449947 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 19:35:52.777149  449947 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 19:35:52.777159  449947 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 19:35:52.777164  449947 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 19:35:52.777481  449947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:35:52.791287  449947 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1016 19:35:52.791322  449947 kubeadm.go:601] duration metric: took 34.010854ms to restartPrimaryControlPlane
	I1016 19:35:52.791332  449947 kubeadm.go:402] duration metric: took 331.991514ms to StartCluster
	I1016 19:35:52.791348  449947 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:35:52.791413  449947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:35:52.792349  449947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:35:52.792564  449947 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:35:52.792877  449947 config.go:182] Loaded profile config "pause-870778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:35:52.792924  449947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:35:52.796352  449947 out.go:179] * Verifying Kubernetes components...
	I1016 19:35:52.796353  449947 out.go:179] * Enabled addons: 
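"Verifying Kubernetes components" largely amounts to waiting, through the client config shown earlier, for the node and the kube-system pods to become Ready. A hand-run equivalent, as a sketch that assumes the kubeconfig context carries the profile name:

	kubectl --kubeconfig /home/jenkins/minikube-integration/21738-288457/kubeconfig \
	  --context pause-870778 get nodes,pods -n kube-system -o wide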
	I1016 19:35:51.624060  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:51.624409  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:51.624458  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:51.624514  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:51.666085  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:51.666109  432201 cri.go:89] found id: ""
	I1016 19:35:51.666119  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:51.666174  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.671693  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:51.671769  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:51.716899  432201 cri.go:89] found id: ""
	I1016 19:35:51.716924  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.716933  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:51.716939  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:51.717000  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:51.759158  432201 cri.go:89] found id: ""
	I1016 19:35:51.759185  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.759193  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:51.759201  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:51.759259  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:51.800567  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:51.800592  432201 cri.go:89] found id: ""
	I1016 19:35:51.800601  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:51.800696  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.804831  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:51.804905  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:51.836863  432201 cri.go:89] found id: ""
	I1016 19:35:51.836891  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.836900  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:51.836906  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:51.836967  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:51.876412  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:51.876435  432201 cri.go:89] found id: ""
	I1016 19:35:51.876443  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:51.876501  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:51.880764  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:51.880866  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:51.911793  432201 cri.go:89] found id: ""
	I1016 19:35:51.911817  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.911825  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:51.911832  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:51.911952  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:51.947496  432201 cri.go:89] found id: ""
	I1016 19:35:51.947522  432201 logs.go:282] 0 containers: []
	W1016 19:35:51.947531  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:51.947540  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:51.947579  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:51.972497  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:51.972527  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:52.086339  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:52.086365  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:52.086379  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:52.143960  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:52.143993  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:52.250937  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:52.251014  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:52.300844  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:52.300877  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:52.389784  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:52.389877  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:52.439877  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:52.439902  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
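
The block above is one full fallback cycle: while the apiserver at 192.168.85.2:8443 refuses connections, the tool enumerates containers per component with crictl and tails whatever logs it can still reach (kubelet, dmesg, CRI-O, and the surviving kube-apiserver/kube-scheduler/kube-controller-manager containers), then retries healthz. As a rough sketch of that discovery-and-tail pattern in Go — not minikube's actual implementation; the runCmd helper runs locally rather than over SSH, and the component list is copied from the log above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runCmd stands in for minikube's ssh_runner: it runs a shell command and
    // returns its stdout. For illustration it runs on the local host.
    func runCmd(cmd string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).Output()
    	return string(out), err
    }

    func main() {
    	// Component names probed by the log-gathering loop recorded above.
    	names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}

    	for _, name := range names {
    		// Discover container IDs, as `sudo crictl ps -a --quiet --name=<component>` does.
    		out, err := runCmd("sudo crictl ps -a --quiet --name=" + name)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(out)
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		// Tail each discovered container, mirroring `crictl logs --tail 400 <id>`.
    		for _, id := range ids {
    			logs, _ := runCmd("sudo crictl logs --tail 400 " + id)
    			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
    		}
    	}
    }
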
	I1016 19:35:55.113924  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:55.114316  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:55.114357  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:55.114411  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:55.167548  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:55.167567  432201 cri.go:89] found id: ""
	I1016 19:35:55.167575  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:55.167632  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:55.171447  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:55.171514  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:55.217611  432201 cri.go:89] found id: ""
	I1016 19:35:55.217632  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.217640  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:55.217647  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:55.217708  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:52.799400  449947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:35:52.799566  449947 addons.go:514] duration metric: took 6.642148ms for enable addons: enabled=[]
	I1016 19:35:53.104876  449947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:35:53.121775  449947 node_ready.go:35] waiting up to 6m0s for node "pause-870778" to be "Ready" ...
	I1016 19:35:55.277373  432201 cri.go:89] found id: ""
	I1016 19:35:55.277394  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.277402  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:55.277420  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:55.277480  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:55.316798  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:55.316870  432201 cri.go:89] found id: ""
	I1016 19:35:55.316881  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:55.316973  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:55.321029  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:55.321096  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:55.376302  432201 cri.go:89] found id: ""
	I1016 19:35:55.376324  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.376332  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:55.376339  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:55.376398  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:55.419560  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:55.419579  432201 cri.go:89] found id: ""
	I1016 19:35:55.419596  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:55.419654  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:55.424012  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:55.424079  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:55.458286  432201 cri.go:89] found id: ""
	I1016 19:35:55.458362  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.458384  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:55.458404  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:55.458501  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:55.498252  432201 cri.go:89] found id: ""
	I1016 19:35:55.498274  432201 logs.go:282] 0 containers: []
	W1016 19:35:55.498282  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:55.498292  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:55.498303  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:55.571497  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:55.571598  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:55.637787  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:55.637857  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:55.784223  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:55.784333  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:55.801905  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:55.801936  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:55.933815  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:55.933890  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:55.933919  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:55.991654  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:55.991728  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:56.101900  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:56.101935  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:58.645763  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:35:58.646275  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:35:58.646326  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:35:58.646386  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:35:58.677929  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:58.677949  432201 cri.go:89] found id: ""
	I1016 19:35:58.677958  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:35:58.678019  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:58.682034  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:35:58.682115  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:35:58.707887  432201 cri.go:89] found id: ""
	I1016 19:35:58.707908  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.707916  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:35:58.707923  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:35:58.707979  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:35:58.734327  432201 cri.go:89] found id: ""
	I1016 19:35:58.734349  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.734357  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:35:58.734363  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:35:58.734421  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:35:58.767958  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:58.767977  432201 cri.go:89] found id: ""
	I1016 19:35:58.767986  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:35:58.768044  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:58.772350  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:35:58.772468  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:35:58.798419  432201 cri.go:89] found id: ""
	I1016 19:35:58.798442  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.798451  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:35:58.798458  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:35:58.798519  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:35:58.824036  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:58.824114  432201 cri.go:89] found id: ""
	I1016 19:35:58.824140  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:35:58.824219  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:35:58.827929  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:35:58.828004  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:35:58.855116  432201 cri.go:89] found id: ""
	I1016 19:35:58.855152  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.855161  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:35:58.855187  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:35:58.855301  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:35:58.882514  432201 cri.go:89] found id: ""
	I1016 19:35:58.882539  432201 logs.go:282] 0 containers: []
	W1016 19:35:58.882551  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:35:58.882561  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:35:58.882573  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:35:58.985618  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:35:58.985650  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:35:58.985665  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:35:59.040349  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:35:59.040391  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:35:59.137878  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:35:59.137916  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:35:59.168248  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:35:59.168273  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:35:59.234014  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:35:59.234052  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:35:59.265710  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:35:59.265739  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:35:59.396140  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:35:59.396176  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:35:57.021245  449947 node_ready.go:49] node "pause-870778" is "Ready"
	I1016 19:35:57.021272  449947 node_ready.go:38] duration metric: took 3.899458885s for node "pause-870778" to be "Ready" ...
	I1016 19:35:57.021287  449947 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:35:57.021349  449947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:35:57.043192  449947 api_server.go:72] duration metric: took 4.250580864s to wait for apiserver process to appear ...
	I1016 19:35:57.043214  449947 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:35:57.043233  449947 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:35:57.120670  449947 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 19:35:57.120783  449947 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 19:35:57.543411  449947 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:35:57.555963  449947 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 19:35:57.555997  449947 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 19:35:58.043377  449947 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:35:58.054511  449947 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
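
At 19:35:57 the probe against 192.168.76.2:8443 still returns 500 with several poststarthooks pending (start-service-ip-repair-controllers, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, bootstrap-controller); about half a second later the remaining hooks clear and healthz returns 200. A minimal Go sketch of such a poll-until-healthy loop, with the endpoint, interval, and deadline assumed for illustration (and TLS verification skipped here, whereas minikube verifies against the cluster CA from its kubeconfig):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint and timings are assumptions; the log polls roughly every 500ms.
    	url := "https://192.168.76.2:8443/healthz"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification keeps the sketch self-contained; do not do this
    		// in real tooling.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	deadline := time.Now().Add(1 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("healthz not reachable yet:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz ok")
    				return
    			}
    			// A 500 response enumerates each failing poststarthook, as in the log above.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for healthz")
    }
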
	I1016 19:35:58.055730  449947 api_server.go:141] control plane version: v1.34.1
	I1016 19:35:58.055811  449947 api_server.go:131] duration metric: took 1.012589085s to wait for apiserver health ...
	I1016 19:35:58.055835  449947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:35:58.059674  449947 system_pods.go:59] 8 kube-system pods found
	I1016 19:35:58.059715  449947 system_pods.go:61] "coredns-66bc5c9577-j2chq" [5654ae7b-c8b7-43ca-a406-a2b469ab6a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.059757  449947 system_pods.go:61] "coredns-66bc5c9577-vhkhz" [a0654543-a145-4d72-961a-72e07066dcf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.059771  449947 system_pods.go:61] "etcd-pause-870778" [547edc9d-3421-475f-bb7b-661d90b63c00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:35:58.059785  449947 system_pods.go:61] "kindnet-tljwg" [1f3c571f-2279-4d82-af72-febc2dd3f054] Running
	I1016 19:35:58.059798  449947 system_pods.go:61] "kube-apiserver-pause-870778" [a1ebe7af-c4db-41e2-943a-f4697671b7b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:35:58.059819  449947 system_pods.go:61] "kube-controller-manager-pause-870778" [fbd3c061-36a0-4f91-809d-b0ac670cc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:35:58.059835  449947 system_pods.go:61] "kube-proxy-x4dmw" [2ee80808-9395-44ba-aeee-51c69c0b1f69] Running
	I1016 19:35:58.059845  449947 system_pods.go:61] "kube-scheduler-pause-870778" [a48c3a35-740b-40f2-abf6-b13e1e0ad761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:35:58.059860  449947 system_pods.go:74] duration metric: took 4.002992ms to wait for pod list to return data ...
	I1016 19:35:58.059876  449947 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:35:58.063076  449947 default_sa.go:45] found service account: "default"
	I1016 19:35:58.063106  449947 default_sa.go:55] duration metric: took 3.222707ms for default service account to be created ...
	I1016 19:35:58.063117  449947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:35:58.067074  449947 system_pods.go:86] 8 kube-system pods found
	I1016 19:35:58.067128  449947 system_pods.go:89] "coredns-66bc5c9577-j2chq" [5654ae7b-c8b7-43ca-a406-a2b469ab6a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.067141  449947 system_pods.go:89] "coredns-66bc5c9577-vhkhz" [a0654543-a145-4d72-961a-72e07066dcf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:35:58.067185  449947 system_pods.go:89] "etcd-pause-870778" [547edc9d-3421-475f-bb7b-661d90b63c00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:35:58.067200  449947 system_pods.go:89] "kindnet-tljwg" [1f3c571f-2279-4d82-af72-febc2dd3f054] Running
	I1016 19:35:58.067206  449947 system_pods.go:89] "kube-apiserver-pause-870778" [a1ebe7af-c4db-41e2-943a-f4697671b7b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:35:58.067215  449947 system_pods.go:89] "kube-controller-manager-pause-870778" [fbd3c061-36a0-4f91-809d-b0ac670cc309] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:35:58.067224  449947 system_pods.go:89] "kube-proxy-x4dmw" [2ee80808-9395-44ba-aeee-51c69c0b1f69] Running
	I1016 19:35:58.067231  449947 system_pods.go:89] "kube-scheduler-pause-870778" [a48c3a35-740b-40f2-abf6-b13e1e0ad761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:35:58.067246  449947 system_pods.go:126] duration metric: took 4.116011ms to wait for k8s-apps to be running ...
	I1016 19:35:58.067259  449947 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:35:58.067318  449947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:35:58.083369  449947 system_svc.go:56] duration metric: took 16.099494ms WaitForService to wait for kubelet
	I1016 19:35:58.083398  449947 kubeadm.go:586] duration metric: took 5.290792011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:35:58.083417  449947 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:35:58.086767  449947 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:35:58.086801  449947 node_conditions.go:123] node cpu capacity is 2
	I1016 19:35:58.086815  449947 node_conditions.go:105] duration metric: took 3.392217ms to run NodePressure ...
	I1016 19:35:58.086828  449947 start.go:241] waiting for startup goroutines ...
	I1016 19:35:58.086836  449947 start.go:246] waiting for cluster config update ...
	I1016 19:35:58.086844  449947 start.go:255] writing updated cluster config ...
	I1016 19:35:58.087204  449947 ssh_runner.go:195] Run: rm -f paused
	I1016 19:35:58.091592  449947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:35:58.092360  449947 kapi.go:59] client config for pause-870778: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/profiles/pause-870778/client.key", CAFile:"/home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 19:35:58.096346  449947 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j2chq" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:36:00.169876  449947 pod_ready.go:104] pod "coredns-66bc5c9577-j2chq" is not "Ready", error: <nil>
	I1016 19:36:01.914869  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:01.915319  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:01.915376  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:01.915433  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:01.942668  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:01.942731  432201 cri.go:89] found id: ""
	I1016 19:36:01.942765  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:01.942834  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:01.946609  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:01.946708  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:01.974062  432201 cri.go:89] found id: ""
	I1016 19:36:01.974083  432201 logs.go:282] 0 containers: []
	W1016 19:36:01.974092  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:01.974099  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:01.974182  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:02.000901  432201 cri.go:89] found id: ""
	I1016 19:36:02.000924  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.000933  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:02.000939  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:02.001042  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:02.032454  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:02.032479  432201 cri.go:89] found id: ""
	I1016 19:36:02.032488  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:02.032581  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:02.037108  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:02.037235  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:02.069355  432201 cri.go:89] found id: ""
	I1016 19:36:02.069376  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.069385  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:02.069422  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:02.069510  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:02.106166  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:02.106188  432201 cri.go:89] found id: ""
	I1016 19:36:02.106197  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:02.106285  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:02.110591  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:02.110699  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:02.144585  432201 cri.go:89] found id: ""
	I1016 19:36:02.144611  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.144619  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:02.144626  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:02.144713  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:02.172778  432201 cri.go:89] found id: ""
	I1016 19:36:02.172810  432201 logs.go:282] 0 containers: []
	W1016 19:36:02.172818  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:02.172828  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:02.172871  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:02.206091  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:02.206128  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:02.270642  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:02.270679  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:02.297468  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:02.297496  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:02.361841  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:02.361878  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:02.407367  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:02.407393  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:02.536221  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:02.536311  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:36:02.554810  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:02.554839  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:02.633122  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:05.133343  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:05.133830  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:05.133904  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:05.133982  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:05.161489  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:05.161521  432201 cri.go:89] found id: ""
	I1016 19:36:05.161530  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:05.161598  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:05.165838  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:05.165955  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:05.192500  432201 cri.go:89] found id: ""
	I1016 19:36:05.192525  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.192534  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:05.192541  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:05.192612  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:05.221023  432201 cri.go:89] found id: ""
	I1016 19:36:05.221051  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.221060  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:05.221067  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:05.221124  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:05.256554  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:05.256578  432201 cri.go:89] found id: ""
	I1016 19:36:05.256587  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:05.256653  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:05.260604  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:05.260701  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1016 19:36:02.602907  449947 pod_ready.go:104] pod "coredns-66bc5c9577-j2chq" is not "Ready", error: <nil>
	I1016 19:36:04.601652  449947 pod_ready.go:94] pod "coredns-66bc5c9577-j2chq" is "Ready"
	I1016 19:36:04.601681  449947 pod_ready.go:86] duration metric: took 6.505303257s for pod "coredns-66bc5c9577-j2chq" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:04.601691  449947 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vhkhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:04.606659  449947 pod_ready.go:94] pod "coredns-66bc5c9577-vhkhz" is "Ready"
	I1016 19:36:04.606688  449947 pod_ready.go:86] duration metric: took 4.990287ms for pod "coredns-66bc5c9577-vhkhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:04.609655  449947 pod_ready.go:83] waiting for pod "etcd-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:36:06.616311  449947 pod_ready.go:104] pod "etcd-pause-870778" is not "Ready", error: <nil>
	I1016 19:36:05.289453  432201 cri.go:89] found id: ""
	I1016 19:36:05.289478  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.289487  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:05.289493  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:05.289597  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:05.322412  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:05.322437  432201 cri.go:89] found id: ""
	I1016 19:36:05.322446  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:05.322517  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:05.326576  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:05.326658  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:05.361976  432201 cri.go:89] found id: ""
	I1016 19:36:05.362002  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.362019  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:05.362026  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:05.362085  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:05.390534  432201 cri.go:89] found id: ""
	I1016 19:36:05.390565  432201 logs.go:282] 0 containers: []
	W1016 19:36:05.390574  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:05.390586  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:05.390597  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:05.510838  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:05.510878  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:36:05.531695  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:05.531731  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:05.620173  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:05.620196  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:05.620209  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:05.654535  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:05.654569  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:05.721323  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:05.721357  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:05.750367  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:05.750396  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:05.812431  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:05.812470  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:08.346593  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:08.346945  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:08.346983  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:08.347034  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:08.374990  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:08.375024  432201 cri.go:89] found id: ""
	I1016 19:36:08.375033  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:08.375101  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:08.379048  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:08.379119  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:08.407979  432201 cri.go:89] found id: ""
	I1016 19:36:08.408001  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.408010  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:08.408016  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:08.408075  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:08.439179  432201 cri.go:89] found id: ""
	I1016 19:36:08.439203  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.439211  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:08.439218  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:08.439284  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:08.467360  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:08.467383  432201 cri.go:89] found id: ""
	I1016 19:36:08.467392  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:08.467450  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:08.471427  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:08.471504  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:08.498956  432201 cri.go:89] found id: ""
	I1016 19:36:08.498978  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.498986  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:08.498992  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:08.499055  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:08.534062  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:08.534086  432201 cri.go:89] found id: ""
	I1016 19:36:08.534094  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:08.534151  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:08.538437  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:08.538532  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:08.564760  432201 cri.go:89] found id: ""
	I1016 19:36:08.564798  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.564823  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:08.564832  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:08.564909  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:08.592599  432201 cri.go:89] found id: ""
	I1016 19:36:08.592626  432201 logs.go:282] 0 containers: []
	W1016 19:36:08.592635  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:08.592644  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:08.592656  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:08.670096  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:08.670118  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:08.670131  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:08.702830  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:08.702864  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:08.770726  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:08.770766  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:08.797493  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:08.797521  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:08.859192  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:08.859228  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:08.890327  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:08.890353  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:09.014778  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:09.014815  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1016 19:36:09.115798  449947 pod_ready.go:104] pod "etcd-pause-870778" is not "Ready", error: <nil>
	I1016 19:36:11.616476  449947 pod_ready.go:94] pod "etcd-pause-870778" is "Ready"
	I1016 19:36:11.616507  449947 pod_ready.go:86] duration metric: took 7.00682367s for pod "etcd-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:11.619637  449947 pod_ready.go:83] waiting for pod "kube-apiserver-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.160905  449947 pod_ready.go:94] pod "kube-apiserver-pause-870778" is "Ready"
	I1016 19:36:12.160930  449947 pod_ready.go:86] duration metric: took 541.259614ms for pod "kube-apiserver-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.172262  449947 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.185402  449947 pod_ready.go:94] pod "kube-controller-manager-pause-870778" is "Ready"
	I1016 19:36:12.185427  449947 pod_ready.go:86] duration metric: took 13.141572ms for pod "kube-controller-manager-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.198318  449947 pod_ready.go:83] waiting for pod "kube-proxy-x4dmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.220625  449947 pod_ready.go:94] pod "kube-proxy-x4dmw" is "Ready"
	I1016 19:36:12.220648  449947 pod_ready.go:86] duration metric: took 22.306407ms for pod "kube-proxy-x4dmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.413797  449947 pod_ready.go:83] waiting for pod "kube-scheduler-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.813022  449947 pod_ready.go:94] pod "kube-scheduler-pause-870778" is "Ready"
	I1016 19:36:12.813052  449947 pod_ready.go:86] duration metric: took 399.228205ms for pod "kube-scheduler-pause-870778" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:36:12.813064  449947 pod_ready.go:40] duration metric: took 14.721436649s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:36:12.869476  449947 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:36:12.872671  449947 out.go:179] * Done! kubectl is now configured to use "pause-870778" cluster and "default" namespace by default
	I1016 19:36:11.532133  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:11.532609  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:11.532652  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:11.532724  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:11.581116  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:11.581162  432201 cri.go:89] found id: ""
	I1016 19:36:11.581172  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:11.581229  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:11.586933  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:11.587058  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:11.623670  432201 cri.go:89] found id: ""
	I1016 19:36:11.623743  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.623765  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:11.623788  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:11.623911  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:11.656560  432201 cri.go:89] found id: ""
	I1016 19:36:11.656644  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.656668  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:11.656691  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:11.656822  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:11.685874  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:11.685896  432201 cri.go:89] found id: ""
	I1016 19:36:11.685906  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:11.685984  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:11.689659  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:11.689759  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:11.714889  432201 cri.go:89] found id: ""
	I1016 19:36:11.714915  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.714923  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:11.714930  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:11.715050  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:11.742632  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:11.742654  432201 cri.go:89] found id: ""
	I1016 19:36:11.742663  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:11.742761  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:11.746451  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:11.746552  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:11.775062  432201 cri.go:89] found id: ""
	I1016 19:36:11.775145  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.775169  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:11.775192  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:11.775257  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:11.809728  432201 cri.go:89] found id: ""
	I1016 19:36:11.809813  432201 logs.go:282] 0 containers: []
	W1016 19:36:11.809837  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:11.809880  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:11.809911  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:11.889311  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:11.889350  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:11.917496  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:11.917524  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 19:36:11.979808  432201 logs.go:123] Gathering logs for container status ...
	I1016 19:36:11.979847  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 19:36:12.026240  432201 logs.go:123] Gathering logs for kubelet ...
	I1016 19:36:12.026273  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 19:36:12.167563  432201 logs.go:123] Gathering logs for dmesg ...
	I1016 19:36:12.167683  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 19:36:12.191842  432201 logs.go:123] Gathering logs for describe nodes ...
	I1016 19:36:12.191878  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 19:36:12.275232  432201 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 19:36:12.275255  432201 logs.go:123] Gathering logs for kube-apiserver [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d] ...
	I1016 19:36:12.275268  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:14.809387  432201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:36:14.809793  432201 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1016 19:36:14.809841  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 19:36:14.809903  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 19:36:14.841599  432201 cri.go:89] found id: "1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d"
	I1016 19:36:14.841621  432201 cri.go:89] found id: ""
	I1016 19:36:14.841630  432201 logs.go:282] 1 containers: [1ed6b21fdbd512add5b2d857604b57bf1e47f64be213cda6d53cf301b12c985d]
	I1016 19:36:14.841686  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:14.845445  432201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 19:36:14.845529  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 19:36:14.872194  432201 cri.go:89] found id: ""
	I1016 19:36:14.872221  432201 logs.go:282] 0 containers: []
	W1016 19:36:14.872229  432201 logs.go:284] No container was found matching "etcd"
	I1016 19:36:14.872236  432201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 19:36:14.872297  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 19:36:14.898257  432201 cri.go:89] found id: ""
	I1016 19:36:14.898283  432201 logs.go:282] 0 containers: []
	W1016 19:36:14.898291  432201 logs.go:284] No container was found matching "coredns"
	I1016 19:36:14.898298  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 19:36:14.898360  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 19:36:14.925331  432201 cri.go:89] found id: "0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:14.925353  432201 cri.go:89] found id: ""
	I1016 19:36:14.925361  432201 logs.go:282] 1 containers: [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd]
	I1016 19:36:14.925419  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:14.929314  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 19:36:14.929388  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 19:36:14.956388  432201 cri.go:89] found id: ""
	I1016 19:36:14.956412  432201 logs.go:282] 0 containers: []
	W1016 19:36:14.956420  432201 logs.go:284] No container was found matching "kube-proxy"
	I1016 19:36:14.956426  432201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 19:36:14.956487  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 19:36:14.983506  432201 cri.go:89] found id: "47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:14.983527  432201 cri.go:89] found id: ""
	I1016 19:36:14.983537  432201 logs.go:282] 1 containers: [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88]
	I1016 19:36:14.983619  432201 ssh_runner.go:195] Run: which crictl
	I1016 19:36:14.988126  432201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 19:36:14.988196  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 19:36:15.033311  432201 cri.go:89] found id: ""
	I1016 19:36:15.033347  432201 logs.go:282] 0 containers: []
	W1016 19:36:15.033358  432201 logs.go:284] No container was found matching "kindnet"
	I1016 19:36:15.033365  432201 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 19:36:15.033439  432201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 19:36:15.083841  432201 cri.go:89] found id: ""
	I1016 19:36:15.083869  432201 logs.go:282] 0 containers: []
	W1016 19:36:15.083878  432201 logs.go:284] No container was found matching "storage-provisioner"
	I1016 19:36:15.083887  432201 logs.go:123] Gathering logs for kube-scheduler [0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd] ...
	I1016 19:36:15.083900  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0c051943d1de280fcac4a19ca7dd45a0a36e66798e594145fa86520fa22f41dd"
	I1016 19:36:15.164897  432201 logs.go:123] Gathering logs for kube-controller-manager [47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88] ...
	I1016 19:36:15.164937  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47b9b240fbd013cdfbc603b0d67833bd1a60eb57978852687ebb11e05ac5cf88"
	I1016 19:36:15.207989  432201 logs.go:123] Gathering logs for CRI-O ...
	I1016 19:36:15.208019  432201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.680151397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.681440362Z" level=info msg="Started container" PID=2359 containerID=9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41 description=kube-system/kindnet-tljwg/kindnet-cni id=393b7077-e668-441f-beef-49405fac3759 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24c574eb8d980f94b71dce0f1fa0488aff165b4b550683df9ef78edda497b152
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.677528463Z" level=info msg="Starting container: cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f" id=82b3203c-e4b5-480b-970f-d32840cb40f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.692862086Z" level=info msg="Started container" PID=2362 containerID=cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f description=kube-system/coredns-66bc5c9577-j2chq/coredns id=82b3203c-e4b5-480b-970f-d32840cb40f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6392c90f6af90784c71f61e7a4bf28c524dad3a05b4502db375c805e9a1f1753
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.717523732Z" level=info msg="Created container 05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e: kube-system/etcd-pause-870778/etcd" id=4d966c5f-69dc-4394-bffe-e5df0408c568 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.720208557Z" level=info msg="Starting container: 05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e" id=3e90ea73-e9bc-4007-904f-efeb542de07d name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.723550878Z" level=info msg="Started container" PID=2372 containerID=05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e description=kube-system/etcd-pause-870778/etcd id=3e90ea73-e9bc-4007-904f-efeb542de07d name=/runtime.v1.RuntimeService/StartContainer sandboxID=50ed380698d1c6e0c0e39c6d998982a7501c4f8557377a0add9c807c595831e8
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.736938123Z" level=info msg="Created container f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a: kube-system/kube-controller-manager-pause-870778/kube-controller-manager" id=4ee29a19-56b5-459b-add4-d7e1a2fe187a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.738840981Z" level=info msg="Starting container: f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a" id=849fc6e5-2d2b-4ef1-9229-6df0648d7d57 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.748857047Z" level=info msg="Started container" PID=2396 containerID=f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a description=kube-system/kube-controller-manager-pause-870778/kube-controller-manager id=849fc6e5-2d2b-4ef1-9229-6df0648d7d57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=962e5937049b3b13276548c5064f08d6543bcad7c21333f1a908b74e76bcdea2
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.862343943Z" level=info msg="Created container 4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0: kube-system/kube-proxy-x4dmw/kube-proxy" id=c5499f56-777d-4d97-8286-414b1aa15dc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.86298877Z" level=info msg="Starting container: 4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0" id=cbbe9152-40a9-4689-a26d-7c2e3d3f0afa name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:35:52 pause-870778 crio[2098]: time="2025-10-16T19:35:52.866590029Z" level=info msg="Started container" PID=2378 containerID=4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0 description=kube-system/kube-proxy-x4dmw/kube-proxy id=cbbe9152-40a9-4689-a26d-7c2e3d3f0afa name=/runtime.v1.RuntimeService/StartContainer sandboxID=07f9c4fb2e4beedb8929c542cc23716f3f467460d36baf07a8297bda674a9762
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.007671112Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.012882331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.012926729Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.012953264Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.017028364Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.017076668Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.017102875Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.021115329Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.021192154Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.021218402Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.024893041Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:36:03 pause-870778 crio[2098]: time="2025-10-16T19:36:03.024991503Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f1db270c13de7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   26 seconds ago       Running             kube-controller-manager   1                   962e5937049b3       kube-controller-manager-pause-870778   kube-system
	4059e38191b26       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   26 seconds ago       Running             kube-proxy                1                   07f9c4fb2e4be       kube-proxy-x4dmw                       kube-system
	cb7ea64c57a6b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Running             coredns                   1                   6392c90f6af90       coredns-66bc5c9577-j2chq               kube-system
	9a54382d4bc86       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   26 seconds ago       Running             kindnet-cni               1                   24c574eb8d980       kindnet-tljwg                          kube-system
	ae50feda76840       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Running             coredns                   1                   d031a84b3b633       coredns-66bc5c9577-vhkhz               kube-system
	05bbf102a2113       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   26 seconds ago       Running             etcd                      1                   50ed380698d1c       etcd-pause-870778                      kube-system
	7abef40542740       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   26 seconds ago       Running             kube-apiserver            1                   3ef2cdbf8aead       kube-apiserver-pause-870778            kube-system
	453a3e3ee78d5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   26 seconds ago       Running             kube-scheduler            1                   95f499bdf08b8       kube-scheduler-pause-870778            kube-system
	36bd434b7df4f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   6392c90f6af90       coredns-66bc5c9577-j2chq               kube-system
	998613c05e7f1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   d031a84b3b633       coredns-66bc5c9577-vhkhz               kube-system
	3b392ff5a2e8e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   24c574eb8d980       kindnet-tljwg                          kube-system
	1fa43c29e5044       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   07f9c4fb2e4be       kube-proxy-x4dmw                       kube-system
	78a959960479c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   50ed380698d1c       etcd-pause-870778                      kube-system
	976c969aa054f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   3ef2cdbf8aead       kube-apiserver-pause-870778            kube-system
	7832a0d4d815d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   962e5937049b3       kube-controller-manager-pause-870778   kube-system
	6a93a6454e89d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   95f499bdf08b8       kube-scheduler-pause-870778            kube-system
	
	
	==> coredns [36bd434b7df4ff2386447f12fc15907a45580613a54171383ed220631e0a295b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42325 - 33273 "HINFO IN 289888803237167472.8511805586887677807. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008637089s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38578 - 49796 "HINFO IN 1497488450740844195.3611410341252859870. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014022943s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ae50feda76840007009f20128d2985fc95c60eb2bd7543095ac670363b69844c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38811 - 61888 "HINFO IN 8722966920593532517.6734627329511225537. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024005162s
	
	
	==> coredns [cb7ea64c57a6b2d64ce9ec1cd5c5305bb5160b5d51cdc02f56727cd3bc062e9f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36045 - 22332 "HINFO IN 761004753842047284.7834603073744702931. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00768399s
	
	
	==> describe nodes <==
	Name:               pause-870778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-870778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=pause-870778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_34_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:34:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-870778
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:36:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:34:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:34:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:34:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:35:38 +0000   Thu, 16 Oct 2025 19:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-870778
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                37abcd1e-2ee0-4c68-904f-ac1f5cf6438e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-j2chq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 coredns-66bc5c9577-vhkhz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-870778                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         89s
	  kube-system                 kindnet-tljwg                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-870778             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-870778    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-x4dmw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-870778             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 81s                kube-proxy       
	  Normal   Starting                 22s                kube-proxy       
	  Normal   NodeHasSufficientPID     96s (x8 over 96s)  kubelet          Node pause-870778 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 96s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-870778 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-870778 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 96s                kubelet          Starting kubelet.
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s                kubelet          Node pause-870778 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s                kubelet          Node pause-870778 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     87s                kubelet          Node pause-870778 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           83s                node-controller  Node pause-870778 event: Registered Node pause-870778 in Controller
	  Normal   NodeReady                41s                kubelet          Node pause-870778 status is now: NodeReady
	  Normal   RegisteredNode           19s                node-controller  Node pause-870778 event: Registered Node pause-870778 in Controller
	
	
	==> dmesg <==
	[Oct16 18:59] overlayfs: idmapped layers are currently not supported
	[ +38.025144] overlayfs: idmapped layers are currently not supported
	[Oct16 19:08] overlayfs: idmapped layers are currently not supported
	[  +3.621058] overlayfs: idmapped layers are currently not supported
	[ +41.218849] overlayfs: idmapped layers are currently not supported
	[Oct16 19:09] overlayfs: idmapped layers are currently not supported
	[Oct16 19:11] overlayfs: idmapped layers are currently not supported
	[Oct16 19:16] overlayfs: idmapped layers are currently not supported
	[ +33.922450] overlayfs: idmapped layers are currently not supported
	[Oct16 19:18] overlayfs: idmapped layers are currently not supported
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [05bbf102a21139c3005b2c4c4c00ba00d6bd04b54f8f16436a691c6a2bde8b9e] <==
	{"level":"warn","ts":"2025-10-16T19:35:54.998291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.054901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.097292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.108836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.166761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.214322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.251211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.274347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.316938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.350083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.389595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.418079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.441699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.471422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.509854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.542656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.570358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.590772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.619506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.657090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.694141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.734344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.805372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.835418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:35:55.949454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	
	
	==> etcd [78a959960479c52d4c849b6fa6022c2f23f915fb8f47d0dee2a3b13fbbd7af18] <==
	{"level":"warn","ts":"2025-10-16T19:34:47.389504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.398217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.421835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.456219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.474102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.506674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:34:47.590614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34616","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T19:35:44.122613Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-16T19:35:44.122655Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-870778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-16T19:35:44.122729Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T19:35:44.277921Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T19:35:44.279403Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T19:35:44.279458Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-16T19:35:44.279536Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-16T19:35:44.279554Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279539Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279627Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-16T19:35:44.279660Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279730Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T19:35:44.279748Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-16T19:35:44.279756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T19:35:44.282831Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-16T19:35:44.283414Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T19:35:44.283499Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:35:44.283520Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-870778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 19:36:19 up  2:18,  0 user,  load average: 2.71, 2.66, 2.35
	Linux pause-870778 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3b392ff5a2e8ee87e2387c57764ba62d125a51fdbb71404ec83edbfb827243a0] <==
	I1016 19:34:57.608017       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:34:57.609038       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:34:57.609257       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:34:57.609307       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:34:57.609325       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:34:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:34:57.809070       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:34:57.809095       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:34:57.809105       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:34:57.809857       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:35:27.809706       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:35:27.809900       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 19:35:27.809999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:35:27.810074       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1016 19:35:29.009689       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:35:29.009724       1 metrics.go:72] Registering metrics
	I1016 19:35:29.009816       1 controller.go:711] "Syncing nftables rules"
	I1016 19:35:37.813204       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:35:37.813254       1 main.go:301] handling current node
	
	
	==> kindnet [9a54382d4bc8648852c26609dc83acf27dc0010c1d0d9f18fb11f136c720bd41] <==
	I1016 19:35:52.833429       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:35:52.834287       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:35:52.834493       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:35:52.866295       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:35:52.866838       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:35:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:35:53.007693       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:35:53.007773       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:35:53.007809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:35:53.008301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 19:35:57.209027       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:35:57.209085       1 metrics.go:72] Registering metrics
	I1016 19:35:57.209185       1 controller.go:711] "Syncing nftables rules"
	I1016 19:36:03.007191       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:36:03.007284       1 main.go:301] handling current node
	I1016 19:36:13.007991       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:36:13.008035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7abef405427407987cdfc0d38c0f1eb915e50be06735d2c7f67e3abb3b179695] <==
	I1016 19:35:57.029813       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 19:35:57.030076       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:35:57.030154       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:35:57.037384       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 19:35:57.045427       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:35:57.046679       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 19:35:57.046737       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:35:57.066808       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:35:57.067167       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:35:57.100058       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:35:57.100544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:35:57.101393       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1016 19:35:57.101514       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:35:57.102044       1 aggregator.go:171] initial CRD sync complete...
	I1016 19:35:57.102106       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 19:35:57.102136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:35:57.102192       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:35:57.103070       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1016 19:35:57.140203       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:35:57.725542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:35:59.060261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:36:00.442653       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:36:00.636995       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:36:00.686859       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:36:00.738780       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [976c969aa054f5536aeb2a392d0c178628ec9360569108fed110f8fd94bef670] <==
	W1016 19:35:44.136138       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.136191       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.136263       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.136328       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.137706       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.137889       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138003       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138115       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138213       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138296       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138391       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138485       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138604       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138688       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138804       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.138938       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139028       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139133       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139237       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139345       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.139448       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.140461       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.140605       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.140672       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1016 19:35:44.142324       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e] <==
	I1016 19:34:56.551994       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 19:34:56.552070       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 19:34:56.552214       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 19:34:56.552320       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 19:34:56.552464       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:34:56.552522       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 19:34:56.553993       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 19:34:56.554500       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 19:34:56.554856       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 19:34:56.555458       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:34:56.555712       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:34:56.555814       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-870778"
	I1016 19:34:56.557342       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:34:56.557415       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 19:34:56.560978       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:34:56.561249       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:34:56.561317       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:34:56.561348       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:34:56.561376       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:34:56.572586       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-870778" podCIDRs=["10.244.0.0/24"]
	I1016 19:34:56.574199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:34:56.574317       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:34:56.574359       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:34:56.574293       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:35:41.566630       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [f1db270c13de780f6a205b6b4d186670276f05adeada73398e9bd6b30fd41e6a] <==
	I1016 19:36:00.427361       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:36:00.429349       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:36:00.429926       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 19:36:00.430593       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:36:00.430893       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 19:36:00.430688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 19:36:00.434680       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 19:36:00.436764       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 19:36:00.436974       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:36:00.437390       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 19:36:00.439859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 19:36:00.440039       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:36:00.439870       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:36:00.440231       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:36:00.440261       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:36:00.440275       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:36:00.440282       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:36:00.442115       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 19:36:00.453494       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 19:36:00.453660       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:36:00.453670       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:36:00.453677       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:36:00.454988       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:36:00.460880       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:36:00.460999       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [1fa43c29e504499b5777d8f02c5cdedd9d2cdae2c7b82bcc937a07f2ae00ef16] <==
	I1016 19:34:57.497438       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:34:57.656236       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:34:57.757057       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:34:57.757405       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:34:57.757484       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:34:57.779532       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:34:57.779597       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:34:57.784147       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:34:57.784501       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:34:57.784640       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:34:57.786552       1 config.go:200] "Starting service config controller"
	I1016 19:34:57.786634       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:34:57.786677       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:34:57.786714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:34:57.786749       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:34:57.786777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:34:57.787584       1 config.go:309] "Starting node config controller"
	I1016 19:34:57.787652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:34:57.787680       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:34:57.887775       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:34:57.887785       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:34:57.887803       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [4059e38191b26fe0e8a6fae7b8b3aa08c4fb288de2fed7b7b8c1d56b2fdf6ff0] <==
	I1016 19:35:53.858632       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:35:54.908981       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:35:57.143982       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:35:57.144015       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:35:57.144149       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:35:57.251266       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:35:57.251330       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:35:57.265702       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:35:57.266062       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:35:57.266311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:35:57.278564       1 config.go:200] "Starting service config controller"
	I1016 19:35:57.278675       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:35:57.278756       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:35:57.278806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:35:57.278844       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:35:57.278849       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:35:57.279571       1 config.go:309] "Starting node config controller"
	I1016 19:35:57.279580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:35:57.279586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:35:57.381234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:35:57.381287       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:35:57.381330       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [453a3e3ee78d58a74340babd2fbcac7b8e92bac974c0a00fe84180b09fcc04a5] <==
	I1016 19:35:55.882345       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:35:57.173083       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:35:57.173121       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:35:57.185464       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:35:57.185651       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:35:57.185709       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:35:57.185808       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:35:57.189728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:57.189759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:57.189779       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:57.189786       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:57.286411       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 19:35:57.291109       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:57.291273       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6a93a6454e89deb75178b63bcad9e421253c8cf3ad8cd95dee098c421b8dd117] <==
	I1016 19:34:48.581847       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:34:50.860257       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:34:50.860355       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:34:50.866300       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:34:50.866425       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:34:50.866508       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:34:50.866542       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:34:50.866597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:34:50.866629       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:34:50.866760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:34:50.866839       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:34:50.967331       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:34:50.967448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 19:34:50.967511       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:44.116825       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1016 19:35:44.116851       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1016 19:35:44.116870       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1016 19:35:44.116894       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:35:44.116911       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1016 19:35:44.116929       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:35:44.117439       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1016 19:35:44.117463       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 16 19:35:52 pause-870778 kubelet[1322]: I1016 19:35:52.439346    1322 scope.go:117] "RemoveContainer" containerID="998613c05e7f15a32fb55e0bc139d53f8fefc8dfe93ddf08bb1d48367009bc13"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.440404    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="992c1732a06ea273ce94eac8d202f813" pod="kube-system/kube-apiserver-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.440827    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cc63652dc3f9bd697e0145e37fc17f48" pod="kube-system/kube-scheduler-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.441287    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x4dmw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="2ee80808-9395-44ba-aeee-51c69c0b1f69" pod="kube-system/kube-proxy-x4dmw"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.441611    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tljwg\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f3c571f-2279-4d82-af72-febc2dd3f054" pod="kube-system/kindnet-tljwg"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.441924    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-vhkhz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0654543-a145-4d72-961a-72e07066dcf9" pod="kube-system/coredns-66bc5c9577-vhkhz"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.442233    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="35a8880361f76a34123415ec35118bfd" pod="kube-system/etcd-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.442898    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-j2chq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5654ae7b-c8b7-43ca-a406-a2b469ab6a89" pod="kube-system/coredns-66bc5c9577-j2chq"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: I1016 19:35:52.463982    1322 scope.go:117] "RemoveContainer" containerID="7832a0d4d815d359d4874d18cd9c787088b0d8413ffd5918609a48296d38084e"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.464734    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="992c1732a06ea273ce94eac8d202f813" pod="kube-system/kube-apiserver-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465035    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cc63652dc3f9bd697e0145e37fc17f48" pod="kube-system/kube-scheduler-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465276    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x4dmw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="2ee80808-9395-44ba-aeee-51c69c0b1f69" pod="kube-system/kube-proxy-x4dmw"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465511    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-tljwg\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1f3c571f-2279-4d82-af72-febc2dd3f054" pod="kube-system/kindnet-tljwg"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465724    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-vhkhz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0654543-a145-4d72-961a-72e07066dcf9" pod="kube-system/coredns-66bc5c9577-vhkhz"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.465955    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="35a8880361f76a34123415ec35118bfd" pod="kube-system/etcd-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.466174    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-870778\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f1bf67ee649a97f1300a4eda63a3b1cc" pod="kube-system/kube-controller-manager-pause-870778"
	Oct 16 19:35:52 pause-870778 kubelet[1322]: E1016 19:35:52.466378    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-j2chq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5654ae7b-c8b7-43ca-a406-a2b469ab6a89" pod="kube-system/coredns-66bc5c9577-j2chq"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.868652    1322 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-870778\" is forbidden: User \"system:node:pause-870778\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" podUID="35a8880361f76a34123415ec35118bfd" pod="kube-system/etcd-pause-870778"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.870390    1322 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-870778\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.873113    1322 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-870778\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.873252    1322 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-870778\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 16 19:35:56 pause-870778 kubelet[1322]: E1016 19:35:56.921474    1322 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-870778\" is forbidden: User \"system:node:pause-870778\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-870778' and this object" podUID="f1bf67ee649a97f1300a4eda63a3b1cc" pod="kube-system/kube-controller-manager-pause-870778"
	Oct 16 19:36:13 pause-870778 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:36:13 pause-870778 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:36:13 pause-870778 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-870778 -n pause-870778
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-870778 -n pause-870778: exit status 2 (478.826723ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-870778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (278.431823ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:39:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-663330 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-663330 describe deploy/metrics-server -n kube-system: exit status 1 (90.488215ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-663330 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-663330
helpers_test.go:243: (dbg) docker inspect old-k8s-version-663330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178",
	        "Created": "2025-10-16T19:38:44.050016018Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 466872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:38:44.11083626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/hosts",
	        "LogPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178-json.log",
	        "Name": "/old-k8s-version-663330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-663330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-663330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178",
	                "LowerDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-663330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-663330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-663330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-663330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-663330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44a08e3b7dd081ba62e969bc9a10af5bf1aa264f9d4b8040dc821de706f3059d",
	            "SandboxKey": "/var/run/docker/netns/44a08e3b7dd0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-663330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:3a:80:52:4a:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "954005e57e721c33bcbbc4d582e61219818cba738fb17844a562d84e477b2115",
	                    "EndpointID": "e7483706feb05e40b6a504e07d9dbf4dae8c7e3f89ea99946cf33ee50c4000d8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-663330",
	                        "99b40d8e6d48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-663330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-663330 logs -n 25: (1.217882266s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-078761 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo docker system info                                                                                                                                                                                                      │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo containerd config dump                                                                                                                                                                                                  │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo crio config                                                                                                                                                                                                             │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ delete  │ -p cilium-078761                                                                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p force-systemd-env-871877                                                                                                                                                                                                                   │ force-systemd-env-871877 │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-options-853056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ cert-options-853056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ -p cert-options-853056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p cert-options-853056                                                                                                                                                                                                                        │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:39 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:38:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:38:37.827671  466465 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:38:37.827863  466465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:38:37.827891  466465 out.go:374] Setting ErrFile to fd 2...
	I1016 19:38:37.827910  466465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:38:37.828287  466465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:38:37.828762  466465 out.go:368] Setting JSON to false
	I1016 19:38:37.829759  466465 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8447,"bootTime":1760635071,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:38:37.829851  466465 start.go:141] virtualization:  
	I1016 19:38:37.834045  466465 out.go:179] * [old-k8s-version-663330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:38:37.836911  466465 notify.go:220] Checking for updates...
	I1016 19:38:37.841179  466465 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:38:37.844452  466465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:38:37.847695  466465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:38:37.850882  466465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:38:37.854000  466465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:38:37.857073  466465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:38:37.860677  466465 config.go:182] Loaded profile config "cert-expiration-828182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:38:37.860786  466465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:38:37.892098  466465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:38:37.892205  466465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:38:37.967358  466465 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:38:37.952110662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:38:37.967459  466465 docker.go:318] overlay module found
	I1016 19:38:37.970767  466465 out.go:179] * Using the docker driver based on user configuration
	I1016 19:38:37.973854  466465 start.go:305] selected driver: docker
	I1016 19:38:37.973873  466465 start.go:925] validating driver "docker" against <nil>
	I1016 19:38:37.973900  466465 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:38:37.974712  466465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:38:38.049455  466465 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:38:38.0395812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:38:38.049610  466465 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 19:38:38.049851  466465 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:38:38.053001  466465 out.go:179] * Using Docker driver with root privileges
	I1016 19:38:38.056087  466465 cni.go:84] Creating CNI manager for ""
	I1016 19:38:38.056182  466465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:38:38.056195  466465 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 19:38:38.056301  466465 start.go:349] cluster config:
	{Name:old-k8s-version-663330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-663330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:38:38.059767  466465 out.go:179] * Starting "old-k8s-version-663330" primary control-plane node in "old-k8s-version-663330" cluster
	I1016 19:38:38.062741  466465 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:38:38.065682  466465 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:38:38.068712  466465 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:38:38.068775  466465 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 19:38:38.068856  466465 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1016 19:38:38.068866  466465 cache.go:58] Caching tarball of preloaded images
	I1016 19:38:38.068969  466465 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:38:38.068978  466465 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1016 19:38:38.069159  466465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/config.json ...
	I1016 19:38:38.069328  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/config.json: {Name:mk78591cb312912bd7548c74c3b06ff1aa16fcc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:38.089639  466465 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:38:38.089665  466465 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:38:38.089679  466465 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:38:38.089702  466465 start.go:360] acquireMachinesLock for old-k8s-version-663330: {Name:mkfc6854e34f4bcc7dc2142f3255e3a72c6d316c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:38:38.089822  466465 start.go:364] duration metric: took 101.326µs to acquireMachinesLock for "old-k8s-version-663330"
	I1016 19:38:38.089853  466465 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-663330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-663330 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:38:38.089943  466465 start.go:125] createHost starting for "" (driver="docker")
	I1016 19:38:38.093614  466465 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 19:38:38.093929  466465 start.go:159] libmachine.API.Create for "old-k8s-version-663330" (driver="docker")
	I1016 19:38:38.093983  466465 client.go:168] LocalClient.Create starting
	I1016 19:38:38.094068  466465 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 19:38:38.094114  466465 main.go:141] libmachine: Decoding PEM data...
	I1016 19:38:38.094128  466465 main.go:141] libmachine: Parsing certificate...
	I1016 19:38:38.094186  466465 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 19:38:38.094210  466465 main.go:141] libmachine: Decoding PEM data...
	I1016 19:38:38.094220  466465 main.go:141] libmachine: Parsing certificate...
	I1016 19:38:38.094597  466465 cli_runner.go:164] Run: docker network inspect old-k8s-version-663330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 19:38:38.113078  466465 cli_runner.go:211] docker network inspect old-k8s-version-663330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 19:38:38.113655  466465 network_create.go:284] running [docker network inspect old-k8s-version-663330] to gather additional debugging logs...
	I1016 19:38:38.113705  466465 cli_runner.go:164] Run: docker network inspect old-k8s-version-663330
	W1016 19:38:38.135639  466465 cli_runner.go:211] docker network inspect old-k8s-version-663330 returned with exit code 1
	I1016 19:38:38.135681  466465 network_create.go:287] error running [docker network inspect old-k8s-version-663330]: docker network inspect old-k8s-version-663330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-663330 not found
	I1016 19:38:38.135693  466465 network_create.go:289] output of [docker network inspect old-k8s-version-663330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-663330 not found
	
	** /stderr **
	I1016 19:38:38.135804  466465 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:38:38.153079  466465 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
	I1016 19:38:38.153546  466465 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcb5241e782 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:58:26:d7:8f:45} reservation:<nil>}
	I1016 19:38:38.153800  466465 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-26579fafc836 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:48:af:83:92:ac} reservation:<nil>}
	I1016 19:38:38.154219  466465 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2ce00}
	I1016 19:38:38.154242  466465 network_create.go:124] attempt to create docker network old-k8s-version-663330 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1016 19:38:38.154299  466465 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-663330 old-k8s-version-663330
	I1016 19:38:38.217521  466465 network_create.go:108] docker network old-k8s-version-663330 192.168.76.0/24 created
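For reference, the network-creation step recorded above boils down to a single docker CLI call. The sketch below is illustrative only and restates the command from the log; 192.168.76.0/24 was chosen because 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were already taken by other bridge networks on this host.

	# Illustrative re-run of the network creation step (values copied from the log above).
	docker network create --driver=bridge \
	  --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=old-k8s-version-663330 \
	  old-k8s-version-663330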
	I1016 19:38:38.217555  466465 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-663330" container
	I1016 19:38:38.217644  466465 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 19:38:38.239568  466465 cli_runner.go:164] Run: docker volume create old-k8s-version-663330 --label name.minikube.sigs.k8s.io=old-k8s-version-663330 --label created_by.minikube.sigs.k8s.io=true
	I1016 19:38:38.257832  466465 oci.go:103] Successfully created a docker volume old-k8s-version-663330
	I1016 19:38:38.257928  466465 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-663330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-663330 --entrypoint /usr/bin/test -v old-k8s-version-663330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 19:38:38.796710  466465 oci.go:107] Successfully prepared a docker volume old-k8s-version-663330
	I1016 19:38:38.796767  466465 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 19:38:38.796788  466465 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 19:38:38.796861  466465 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-663330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 19:38:43.973758  466465 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-663330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (5.176856868s)
	I1016 19:38:43.973800  466465 kic.go:203] duration metric: took 5.177009025s to extract preloaded images to volume ...
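The preload extraction is likewise a plain docker run: the cached lz4 image tarball is bind-mounted read-only and untarred into the cluster's named volume by the kicbase image's tar binary. A sketch of the equivalent command, with the paths and image digest taken from the log (this run took about 5.2s):

	# Illustrative sketch of the preload extraction logged above.
	PRELOAD=/home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225'
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro \
	  -v old-k8s-version-663330:/extractDir \
	  "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir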
	W1016 19:38:43.973955  466465 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 19:38:43.974095  466465 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 19:38:44.032323  466465 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-663330 --name old-k8s-version-663330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-663330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-663330 --network old-k8s-version-663330 --ip 192.168.76.2 --volume old-k8s-version-663330:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 19:38:44.371818  466465 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Running}}
	I1016 19:38:44.392559  466465 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Status}}
	I1016 19:38:44.415489  466465 cli_runner.go:164] Run: docker exec old-k8s-version-663330 stat /var/lib/dpkg/alternatives/iptables
	I1016 19:38:44.475492  466465 oci.go:144] the created container "old-k8s-version-663330" has a running status.
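The node itself is an ordinary privileged container attached to the network and volume created above, pinned to the static IP the log calculated. This is a condensed, illustrative form of the docker run invocation shown in full a few lines up (some port publications omitted; see the log for the complete flag set):

	# Condensed sketch of the kic node container start.
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --hostname old-k8s-version-663330 --name old-k8s-version-663330 \
	  --network old-k8s-version-663330 --ip 192.168.76.2 \
	  --volume old-k8s-version-663330:/var \
	  --memory=3072mb --cpus=2 -e container=docker \
	  --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225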
	I1016 19:38:44.475524  466465 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa...
	I1016 19:38:45.366765  466465 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 19:38:45.398148  466465 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Status}}
	I1016 19:38:45.422474  466465 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 19:38:45.422499  466465 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-663330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 19:38:45.467831  466465 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Status}}
	I1016 19:38:45.486200  466465 machine.go:93] provisionDockerMachine start ...
	I1016 19:38:45.486318  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:45.504574  466465 main.go:141] libmachine: Using SSH client type: native
	I1016 19:38:45.504917  466465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1016 19:38:45.504927  466465 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:38:45.505672  466465 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47508->127.0.0.1:33413: read: connection reset by peer
	I1016 19:38:48.653711  466465 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-663330
	
	I1016 19:38:48.653745  466465 ubuntu.go:182] provisioning hostname "old-k8s-version-663330"
	I1016 19:38:48.653814  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:48.680386  466465 main.go:141] libmachine: Using SSH client type: native
	I1016 19:38:48.680740  466465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1016 19:38:48.680757  466465 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-663330 && echo "old-k8s-version-663330" | sudo tee /etc/hostname
	I1016 19:38:48.843581  466465 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-663330
	
	I1016 19:38:48.843663  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:48.862384  466465 main.go:141] libmachine: Using SSH client type: native
	I1016 19:38:48.862696  466465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1016 19:38:48.862722  466465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-663330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-663330/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-663330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:38:49.018119  466465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:38:49.018201  466465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:38:49.018242  466465 ubuntu.go:190] setting up certificates
	I1016 19:38:49.018297  466465 provision.go:84] configureAuth start
	I1016 19:38:49.018408  466465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-663330
	I1016 19:38:49.040721  466465 provision.go:143] copyHostCerts
	I1016 19:38:49.040798  466465 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:38:49.040809  466465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:38:49.040890  466465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:38:49.040995  466465 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:38:49.041000  466465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:38:49.041027  466465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:38:49.041087  466465 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:38:49.041092  466465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:38:49.041115  466465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:38:49.041212  466465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-663330 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-663330]
	I1016 19:38:49.492341  466465 provision.go:177] copyRemoteCerts
	I1016 19:38:49.492410  466465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:38:49.492455  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:49.508777  466465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:38:49.613096  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:38:49.632199  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1016 19:38:49.651502  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 19:38:49.669974  466465 provision.go:87] duration metric: took 651.633754ms to configureAuth
	I1016 19:38:49.670000  466465 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:38:49.670200  466465 config.go:182] Loaded profile config "old-k8s-version-663330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 19:38:49.670317  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:49.687444  466465 main.go:141] libmachine: Using SSH client type: native
	I1016 19:38:49.687757  466465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1016 19:38:49.687782  466465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:38:49.944699  466465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:38:49.944720  466465 machine.go:96] duration metric: took 4.458493812s to provisionDockerMachine
	I1016 19:38:49.944729  466465 client.go:171] duration metric: took 11.850736339s to LocalClient.Create
	I1016 19:38:49.944743  466465 start.go:167] duration metric: took 11.850816175s to libmachine.API.Create "old-k8s-version-663330"
	I1016 19:38:49.944750  466465 start.go:293] postStartSetup for "old-k8s-version-663330" (driver="docker")
	I1016 19:38:49.944759  466465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:38:49.944823  466465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:38:49.944862  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:49.962765  466465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:38:50.069494  466465 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:38:50.073034  466465 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:38:50.073065  466465 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:38:50.073077  466465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:38:50.073163  466465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:38:50.073261  466465 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:38:50.073375  466465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:38:50.081322  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:38:50.100562  466465 start.go:296] duration metric: took 155.797365ms for postStartSetup
	I1016 19:38:50.100959  466465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-663330
	I1016 19:38:50.118845  466465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/config.json ...
	I1016 19:38:50.119170  466465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:38:50.119222  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:50.136398  466465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:38:50.238235  466465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:38:50.243153  466465 start.go:128] duration metric: took 12.153194418s to createHost
	I1016 19:38:50.243179  466465 start.go:83] releasing machines lock for "old-k8s-version-663330", held for 12.153344449s
	I1016 19:38:50.243254  466465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-663330
	I1016 19:38:50.264257  466465 ssh_runner.go:195] Run: cat /version.json
	I1016 19:38:50.264269  466465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:38:50.264307  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:50.264327  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:38:50.281512  466465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:38:50.286597  466465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:38:50.467703  466465 ssh_runner.go:195] Run: systemctl --version
	I1016 19:38:50.474140  466465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:38:50.510891  466465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:38:50.515317  466465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:38:50.515438  466465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:38:50.548800  466465 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1016 19:38:50.548826  466465 start.go:495] detecting cgroup driver to use...
	I1016 19:38:50.548872  466465 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:38:50.548938  466465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:38:50.565721  466465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:38:50.578665  466465 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:38:50.578764  466465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:38:50.596949  466465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:38:50.615837  466465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:38:50.736713  466465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:38:50.874759  466465 docker.go:234] disabling docker service ...
	I1016 19:38:50.874839  466465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:38:50.900021  466465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:38:50.915391  466465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:38:51.042778  466465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:38:51.176555  466465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:38:51.190188  466465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:38:51.204433  466465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1016 19:38:51.204552  466465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:38:51.214271  466465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:38:51.214400  466465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:38:51.223684  466465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:38:51.233287  466465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:38:51.242638  466465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:38:51.251188  466465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:38:51.260231  466465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:38:51.273596  466465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:38:51.282486  466465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:38:51.290258  466465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:38:51.297901  466465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:38:51.410889  466465 ssh_runner.go:195] Run: sudo systemctl restart crio
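The CRI-O reconfiguration above is a short series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf plus a crictl endpoint file, followed by a daemon-reload and a crio restart. Consolidated into one illustrative script (commands copied from the log; meant to be run inside the node container):

	# Illustrative consolidation of the CRI-O setup steps logged above.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pause image and cgroup driver matching this Kubernetes version and the host's cgroupfs driver.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Let pods bind low ports without extra privileges.
	sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio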
	I1016 19:38:51.548971  466465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:38:51.549045  466465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:38:51.553084  466465 start.go:563] Will wait 60s for crictl version
	I1016 19:38:51.553278  466465 ssh_runner.go:195] Run: which crictl
	I1016 19:38:51.556891  466465 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:38:51.585048  466465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:38:51.585164  466465 ssh_runner.go:195] Run: crio --version
	I1016 19:38:51.615707  466465 ssh_runner.go:195] Run: crio --version
	I1016 19:38:51.647019  466465 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1016 19:38:51.649848  466465 cli_runner.go:164] Run: docker network inspect old-k8s-version-663330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:38:51.666542  466465 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 19:38:51.670442  466465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:38:51.680412  466465 kubeadm.go:883] updating cluster {Name:old-k8s-version-663330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-663330 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:38:51.680527  466465 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 19:38:51.680585  466465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:38:51.713036  466465 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:38:51.713061  466465 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:38:51.713119  466465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:38:51.738805  466465 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:38:51.738831  466465 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:38:51.738840  466465 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1016 19:38:51.738925  466465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-663330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-663330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:38:51.739017  466465 ssh_runner.go:195] Run: crio config
	I1016 19:38:51.805684  466465 cni.go:84] Creating CNI manager for ""
	I1016 19:38:51.805705  466465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:38:51.805722  466465 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:38:51.805764  466465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-663330 NodeName:old-k8s-version-663330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:38:51.805946  466465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-663330"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:38:51.806023  466465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1016 19:38:51.814232  466465 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:38:51.814303  466465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:38:51.821963  466465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1016 19:38:51.841846  466465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:38:51.856373  466465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1016 19:38:51.871562  466465 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:38:51.875999  466465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:38:51.886980  466465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:38:52.005339  466465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:38:52.024143  466465 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330 for IP: 192.168.76.2
	I1016 19:38:52.024226  466465 certs.go:195] generating shared ca certs ...
	I1016 19:38:52.024263  466465 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:52.024518  466465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:38:52.024611  466465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:38:52.024639  466465 certs.go:257] generating profile certs ...
	I1016 19:38:52.024757  466465 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.key
	I1016 19:38:52.024811  466465 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt with IP's: []
	I1016 19:38:52.284319  466465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt ...
	I1016 19:38:52.284356  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: {Name:mk9c49a7261fac46bbdcbbb24dcb7827f1014ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:52.284556  466465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.key ...
	I1016 19:38:52.284572  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.key: {Name:mk18b7a2326f16ca1efc293d130f1c3f7171ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:52.284668  466465 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.key.21bd3326
	I1016 19:38:52.284688  466465 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.crt.21bd3326 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1016 19:38:52.740142  466465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.crt.21bd3326 ...
	I1016 19:38:52.740172  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.crt.21bd3326: {Name:mkb41b31d59cc9e8c23a17d8acab7b7ee8a5e3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:52.740370  466465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.key.21bd3326 ...
	I1016 19:38:52.740387  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.key.21bd3326: {Name:mkca6a90a8e445832f113934fb307bedcbd76ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:52.740470  466465 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.crt.21bd3326 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.crt
	I1016 19:38:52.740552  466465 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.key.21bd3326 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.key
	I1016 19:38:52.740617  466465 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.key
	I1016 19:38:52.740636  466465 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.crt with IP's: []
	I1016 19:38:53.340414  466465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.crt ...
	I1016 19:38:53.340443  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.crt: {Name:mkbc27802c31e032af44bd74d725389be433a9bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:53.340644  466465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.key ...
	I1016 19:38:53.340660  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.key: {Name:mkfffd6dc4506d2a4e08d6d1c051d6f6d4372ce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:38:53.340847  466465 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:38:53.340893  466465 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:38:53.340905  466465 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:38:53.340941  466465 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:38:53.340971  466465 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:38:53.341003  466465 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:38:53.341051  466465 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:38:53.341678  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:38:53.361800  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:38:53.381671  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:38:53.400357  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:38:53.421322  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1016 19:38:53.440686  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:38:53.461236  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:38:53.481245  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:38:53.500162  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:38:53.519897  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:38:53.539131  466465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:38:53.560124  466465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:38:53.574294  466465 ssh_runner.go:195] Run: openssl version
	I1016 19:38:53.580402  466465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:38:53.588627  466465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:38:53.592288  466465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:38:53.592393  466465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:38:53.633675  466465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:38:53.642325  466465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:38:53.651109  466465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:38:53.654971  466465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:38:53.655073  466465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:38:53.699213  466465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:38:53.709317  466465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:38:53.719042  466465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:38:53.723366  466465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:38:53.723480  466465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:38:53.770818  466465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
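The three blocks above follow the same pattern for each CA: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it under /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A minimal sketch of that idea (not minikube's code), assuming openssl on PATH, sufficient privileges, and an illustrative certificate path:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative: a CA already copied onto the node

        // "openssl x509 -hash -noout" prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        // Mirror the "test -L ... || ln -fs ..." guard from the log: only create the link if missing.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("trust link:", link)
    }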
	I1016 19:38:53.779196  466465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:38:53.782714  466465 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 19:38:53.782770  466465 kubeadm.go:400] StartCluster: {Name:old-k8s-version-663330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-663330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:38:53.782843  466465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:38:53.782904  466465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:38:53.813034  466465 cri.go:89] found id: ""
	I1016 19:38:53.813109  466465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:38:53.821081  466465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 19:38:53.828917  466465 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 19:38:53.829025  466465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 19:38:53.841692  466465 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 19:38:53.841711  466465 kubeadm.go:157] found existing configuration files:
	
	I1016 19:38:53.841769  466465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 19:38:53.849652  466465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 19:38:53.849771  466465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 19:38:53.857267  466465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 19:38:53.864984  466465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 19:38:53.865105  466465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 19:38:53.872636  466465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 19:38:53.880906  466465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 19:38:53.880979  466465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 19:38:53.888367  466465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 19:38:53.896105  466465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 19:38:53.896206  466465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 19:38:53.903535  466465 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 19:38:53.949904  466465 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1016 19:38:53.950136  466465 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 19:38:53.990833  466465 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 19:38:53.990912  466465 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1016 19:38:53.990954  466465 kubeadm.go:318] OS: Linux
	I1016 19:38:53.991005  466465 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 19:38:53.991059  466465 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1016 19:38:53.991111  466465 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 19:38:53.991166  466465 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 19:38:53.991219  466465 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 19:38:53.991273  466465 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 19:38:53.991327  466465 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 19:38:53.991382  466465 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 19:38:53.991435  466465 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1016 19:38:54.086810  466465 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 19:38:54.086936  466465 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 19:38:54.087045  466465 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1016 19:38:54.245580  466465 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 19:38:54.251605  466465 out.go:252]   - Generating certificates and keys ...
	I1016 19:38:54.251707  466465 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 19:38:54.251788  466465 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 19:38:54.776907  466465 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 19:38:55.206157  466465 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 19:38:56.247403  466465 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 19:38:56.627893  466465 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 19:38:57.077008  466465 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 19:38:57.077399  466465 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-663330] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1016 19:38:57.387981  466465 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 19:38:57.388335  466465 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-663330] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1016 19:38:58.241156  466465 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 19:38:58.636157  466465 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 19:38:58.963748  466465 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 19:38:58.964063  466465 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 19:38:59.462089  466465 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 19:38:59.725535  466465 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 19:39:00.777860  466465 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 19:39:00.967335  466465 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 19:39:00.968156  466465 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 19:39:00.970917  466465 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 19:39:00.974745  466465 out.go:252]   - Booting up control plane ...
	I1016 19:39:00.974848  466465 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 19:39:00.974929  466465 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 19:39:00.975000  466465 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 19:39:00.999227  466465 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 19:39:01.000252  466465 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 19:39:01.000308  466465 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 19:39:01.145802  466465 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1016 19:39:09.657639  466465 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.512328 seconds
	I1016 19:39:09.657766  466465 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 19:39:09.702496  466465 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 19:39:10.249209  466465 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 19:39:10.249439  466465 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-663330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 19:39:10.762998  466465 kubeadm.go:318] [bootstrap-token] Using token: mun6dx.xumcrjb7x222g1mg
	I1016 19:39:10.766021  466465 out.go:252]   - Configuring RBAC rules ...
	I1016 19:39:10.766155  466465 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 19:39:10.771882  466465 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 19:39:10.787398  466465 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 19:39:10.790364  466465 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 19:39:10.797116  466465 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 19:39:10.801627  466465 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 19:39:10.826340  466465 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 19:39:11.126612  466465 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 19:39:11.178019  466465 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 19:39:11.179532  466465 kubeadm.go:318] 
	I1016 19:39:11.179606  466465 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 19:39:11.179613  466465 kubeadm.go:318] 
	I1016 19:39:11.179693  466465 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 19:39:11.179698  466465 kubeadm.go:318] 
	I1016 19:39:11.179725  466465 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 19:39:11.180169  466465 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 19:39:11.180230  466465 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 19:39:11.180236  466465 kubeadm.go:318] 
	I1016 19:39:11.180292  466465 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 19:39:11.180296  466465 kubeadm.go:318] 
	I1016 19:39:11.180347  466465 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 19:39:11.180351  466465 kubeadm.go:318] 
	I1016 19:39:11.180406  466465 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 19:39:11.180484  466465 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 19:39:11.180556  466465 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 19:39:11.180560  466465 kubeadm.go:318] 
	I1016 19:39:11.180649  466465 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 19:39:11.180729  466465 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 19:39:11.180733  466465 kubeadm.go:318] 
	I1016 19:39:11.180821  466465 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mun6dx.xumcrjb7x222g1mg \
	I1016 19:39:11.180937  466465 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 19:39:11.180960  466465 kubeadm.go:318] 	--control-plane 
	I1016 19:39:11.180965  466465 kubeadm.go:318] 
	I1016 19:39:11.181058  466465 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 19:39:11.181064  466465 kubeadm.go:318] 
	I1016 19:39:11.181182  466465 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mun6dx.xumcrjb7x222g1mg \
	I1016 19:39:11.181290  466465 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
	I1016 19:39:11.192998  466465 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 19:39:11.193271  466465 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
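The join commands printed by kubeadm carry a --discovery-token-ca-cert-hash, which is a SHA-256 over the cluster CA's DER-encoded Subject Public Key Info. A minimal sketch showing how that value can be reproduced from the CA file, assuming read access to the path kubeadm used on this node:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path taken from the log above; adjust if the certs live elsewhere.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }

Its output should match the sha256:85a5ed38... value embedded in the join commands above; a mismatch would mean a joining node is being pointed at the wrong CA.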
	I1016 19:39:11.193322  466465 cni.go:84] Creating CNI manager for ""
	I1016 19:39:11.193344  466465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:39:11.197727  466465 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 19:39:11.200715  466465 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 19:39:11.206321  466465 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1016 19:39:11.206339  466465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 19:39:11.241091  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 19:39:12.247821  466465 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.006638796s)
	I1016 19:39:12.247861  466465 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 19:39:12.247988  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:12.248088  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-663330 minikube.k8s.io/updated_at=2025_10_16T19_39_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=old-k8s-version-663330 minikube.k8s.io/primary=true
	I1016 19:39:12.404396  466465 ops.go:34] apiserver oom_adj: -16
	I1016 19:39:12.404505  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:12.904802  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:13.404739  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:13.904863  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:14.405243  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:14.904842  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:15.405121  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:15.905431  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:16.405272  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:16.904655  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:17.405292  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:17.905317  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:18.404630  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:18.904599  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:19.405467  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:19.905258  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:20.405470  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:20.905478  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:21.404848  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:21.904594  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:22.405261  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:22.905617  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:23.405479  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:23.904621  466465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:39:24.034609  466465 kubeadm.go:1113] duration metric: took 11.786664463s to wait for elevateKubeSystemPrivileges
	I1016 19:39:24.034642  466465 kubeadm.go:402] duration metric: took 30.251874295s to StartCluster
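The burst of "kubectl get sa default" runs above is minikube polling until the default ServiceAccount exists before it grants kube-system elevated RBAC (the elevateKubeSystemPrivileges step timed at 11.7s). A minimal sketch of such a wait loop, reusing the exact command from the log; the binary and kubeconfig paths exist only inside the node, so treat them as illustrative:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Same probe the log runs over ssh: succeeds once the default ServiceAccount exists.
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.0/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount is present")
                return
            }
            time.Sleep(500 * time.Millisecond) // the log shows attempts roughly every 500ms
        }
        log.Fatal("timed out waiting for the default ServiceAccount")
    }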
	I1016 19:39:24.034680  466465 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:39:24.034775  466465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:39:24.035813  466465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:39:24.036068  466465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 19:39:24.036092  466465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:39:24.036152  466465 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-663330"
	I1016 19:39:24.036169  466465 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-663330"
	I1016 19:39:24.036073  466465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:39:24.036189  466465 host.go:66] Checking if "old-k8s-version-663330" exists ...
	I1016 19:39:24.036741  466465 config.go:182] Loaded profile config "old-k8s-version-663330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 19:39:24.036785  466465 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-663330"
	I1016 19:39:24.036809  466465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-663330"
	I1016 19:39:24.037084  466465 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Status}}
	I1016 19:39:24.037339  466465 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Status}}
	I1016 19:39:24.041094  466465 out.go:179] * Verifying Kubernetes components...
	I1016 19:39:24.044332  466465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:39:24.077732  466465 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-663330"
	I1016 19:39:24.077774  466465 host.go:66] Checking if "old-k8s-version-663330" exists ...
	I1016 19:39:24.078209  466465 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Status}}
	I1016 19:39:24.079304  466465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:39:24.082290  466465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:39:24.082315  466465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:39:24.082388  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:39:24.113483  466465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:39:24.113511  466465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:39:24.113576  466465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:39:24.128322  466465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:39:24.144876  466465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:39:24.293876  466465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 19:39:24.301870  466465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:39:24.322625  466465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:39:24.346655  466465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:39:24.838939  466465 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
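The pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway IP (192.168.76.1). A minimal sketch of the same edit performed through the API instead of sed, assuming client-go and an illustrative kubeconfig path; like the original it only inserts the block once:

    package main

    import (
        "context"
        "fmt"
        "log"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path for wherever this sketch runs.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx := context.Background()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            hosts := "        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }\n"
            // Insert the hosts block just before the forward plugin, as the sed pipeline does.
            cm.Data["Corefile"] = strings.Replace(corefile,
                "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
            if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("host.minikube.internal record ensured in the CoreDNS Corefile")
    }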
	I1016 19:39:24.841843  466465 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-663330" to be "Ready" ...
	I1016 19:39:25.126296  466465 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1016 19:39:25.129978  466465 addons.go:514] duration metric: took 1.093865727s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1016 19:39:25.343075  466465 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-663330" context rescaled to 1 replicas
	W1016 19:39:26.845807  466465 node_ready.go:57] node "old-k8s-version-663330" has "Ready":"False" status (will retry)
	W1016 19:39:29.345752  466465 node_ready.go:57] node "old-k8s-version-663330" has "Ready":"False" status (will retry)
	W1016 19:39:31.346316  466465 node_ready.go:57] node "old-k8s-version-663330" has "Ready":"False" status (will retry)
	W1016 19:39:33.845194  466465 node_ready.go:57] node "old-k8s-version-663330" has "Ready":"False" status (will retry)
	W1016 19:39:36.345639  466465 node_ready.go:57] node "old-k8s-version-663330" has "Ready":"False" status (will retry)
	I1016 19:39:38.844832  466465 node_ready.go:49] node "old-k8s-version-663330" is "Ready"
	I1016 19:39:38.844864  466465 node_ready.go:38] duration metric: took 14.002955601s for node "old-k8s-version-663330" to be "Ready" ...
	I1016 19:39:38.844878  466465 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:39:38.844937  466465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:39:38.873488  466465 api_server.go:72] duration metric: took 14.837290069s to wait for apiserver process to appear ...
	I1016 19:39:38.873514  466465 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:39:38.873533  466465 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:39:38.882245  466465 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1016 19:39:38.883694  466465 api_server.go:141] control plane version: v1.28.0
	I1016 19:39:38.883719  466465 api_server.go:131] duration metric: took 10.197674ms to wait for apiserver health ...
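The healthz wait above issues a plain GET to https://192.168.76.2:8443/healthz and expects "200 ok" before checking the control-plane version. A minimal sketch of the same probe using only the standard library, trusting the cluster CA rather than skipping TLS verification; the CA path and endpoint are taken from this log and would differ on another cluster:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "time"
    )

    func main() {
        caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // cluster CA, path from the log
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("could not parse the CA certificate")
        }

        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }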
	I1016 19:39:38.883729  466465 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:39:38.887258  466465 system_pods.go:59] 8 kube-system pods found
	I1016 19:39:38.887296  466465 system_pods.go:61] "coredns-5dd5756b68-vqfrr" [27151385-5082-44db-85d8-d01128019b89] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:39:38.887317  466465 system_pods.go:61] "etcd-old-k8s-version-663330" [04ca36a3-85e4-4634-88c0-4c378a0ca598] Running
	I1016 19:39:38.887324  466465 system_pods.go:61] "kindnet-br5zb" [896d1ca7-13c9-4d96-b69c-e0563244f2cc] Running
	I1016 19:39:38.887329  466465 system_pods.go:61] "kube-apiserver-old-k8s-version-663330" [46f58550-a22d-43fd-b011-06421fd0fa42] Running
	I1016 19:39:38.887335  466465 system_pods.go:61] "kube-controller-manager-old-k8s-version-663330" [e1bda887-2cf9-48e1-8fb4-0070a5cff2a6] Running
	I1016 19:39:38.887339  466465 system_pods.go:61] "kube-proxy-7fvsr" [7ba5690e-a465-4b69-85dd-cbaf095ec1f6] Running
	I1016 19:39:38.887344  466465 system_pods.go:61] "kube-scheduler-old-k8s-version-663330" [ab7bbfa7-318f-4d06-a11b-ddf13ed80045] Running
	I1016 19:39:38.887356  466465 system_pods.go:61] "storage-provisioner" [9ec01780-72cc-4fa0-a7b8-b049a6cc173e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:39:38.887362  466465 system_pods.go:74] duration metric: took 3.627359ms to wait for pod list to return data ...
	I1016 19:39:38.887373  466465 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:39:38.889694  466465 default_sa.go:45] found service account: "default"
	I1016 19:39:38.889720  466465 default_sa.go:55] duration metric: took 2.340169ms for default service account to be created ...
	I1016 19:39:38.889730  466465 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:39:38.893379  466465 system_pods.go:86] 8 kube-system pods found
	I1016 19:39:38.893417  466465 system_pods.go:89] "coredns-5dd5756b68-vqfrr" [27151385-5082-44db-85d8-d01128019b89] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:39:38.893424  466465 system_pods.go:89] "etcd-old-k8s-version-663330" [04ca36a3-85e4-4634-88c0-4c378a0ca598] Running
	I1016 19:39:38.893431  466465 system_pods.go:89] "kindnet-br5zb" [896d1ca7-13c9-4d96-b69c-e0563244f2cc] Running
	I1016 19:39:38.893435  466465 system_pods.go:89] "kube-apiserver-old-k8s-version-663330" [46f58550-a22d-43fd-b011-06421fd0fa42] Running
	I1016 19:39:38.893441  466465 system_pods.go:89] "kube-controller-manager-old-k8s-version-663330" [e1bda887-2cf9-48e1-8fb4-0070a5cff2a6] Running
	I1016 19:39:38.893446  466465 system_pods.go:89] "kube-proxy-7fvsr" [7ba5690e-a465-4b69-85dd-cbaf095ec1f6] Running
	I1016 19:39:38.893450  466465 system_pods.go:89] "kube-scheduler-old-k8s-version-663330" [ab7bbfa7-318f-4d06-a11b-ddf13ed80045] Running
	I1016 19:39:38.893462  466465 system_pods.go:89] "storage-provisioner" [9ec01780-72cc-4fa0-a7b8-b049a6cc173e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:39:38.893488  466465 retry.go:31] will retry after 272.054305ms: missing components: kube-dns
	I1016 19:39:39.169424  466465 system_pods.go:86] 8 kube-system pods found
	I1016 19:39:39.169461  466465 system_pods.go:89] "coredns-5dd5756b68-vqfrr" [27151385-5082-44db-85d8-d01128019b89] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:39:39.169468  466465 system_pods.go:89] "etcd-old-k8s-version-663330" [04ca36a3-85e4-4634-88c0-4c378a0ca598] Running
	I1016 19:39:39.169476  466465 system_pods.go:89] "kindnet-br5zb" [896d1ca7-13c9-4d96-b69c-e0563244f2cc] Running
	I1016 19:39:39.169481  466465 system_pods.go:89] "kube-apiserver-old-k8s-version-663330" [46f58550-a22d-43fd-b011-06421fd0fa42] Running
	I1016 19:39:39.169485  466465 system_pods.go:89] "kube-controller-manager-old-k8s-version-663330" [e1bda887-2cf9-48e1-8fb4-0070a5cff2a6] Running
	I1016 19:39:39.169489  466465 system_pods.go:89] "kube-proxy-7fvsr" [7ba5690e-a465-4b69-85dd-cbaf095ec1f6] Running
	I1016 19:39:39.169493  466465 system_pods.go:89] "kube-scheduler-old-k8s-version-663330" [ab7bbfa7-318f-4d06-a11b-ddf13ed80045] Running
	I1016 19:39:39.169499  466465 system_pods.go:89] "storage-provisioner" [9ec01780-72cc-4fa0-a7b8-b049a6cc173e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:39:39.169513  466465 retry.go:31] will retry after 363.774152ms: missing components: kube-dns
	I1016 19:39:39.546858  466465 system_pods.go:86] 8 kube-system pods found
	I1016 19:39:39.546893  466465 system_pods.go:89] "coredns-5dd5756b68-vqfrr" [27151385-5082-44db-85d8-d01128019b89] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:39:39.546901  466465 system_pods.go:89] "etcd-old-k8s-version-663330" [04ca36a3-85e4-4634-88c0-4c378a0ca598] Running
	I1016 19:39:39.546910  466465 system_pods.go:89] "kindnet-br5zb" [896d1ca7-13c9-4d96-b69c-e0563244f2cc] Running
	I1016 19:39:39.546915  466465 system_pods.go:89] "kube-apiserver-old-k8s-version-663330" [46f58550-a22d-43fd-b011-06421fd0fa42] Running
	I1016 19:39:39.546920  466465 system_pods.go:89] "kube-controller-manager-old-k8s-version-663330" [e1bda887-2cf9-48e1-8fb4-0070a5cff2a6] Running
	I1016 19:39:39.546925  466465 system_pods.go:89] "kube-proxy-7fvsr" [7ba5690e-a465-4b69-85dd-cbaf095ec1f6] Running
	I1016 19:39:39.546929  466465 system_pods.go:89] "kube-scheduler-old-k8s-version-663330" [ab7bbfa7-318f-4d06-a11b-ddf13ed80045] Running
	I1016 19:39:39.546939  466465 system_pods.go:89] "storage-provisioner" [9ec01780-72cc-4fa0-a7b8-b049a6cc173e] Running
	I1016 19:39:39.546954  466465 retry.go:31] will retry after 388.793303ms: missing components: kube-dns
	I1016 19:39:39.940404  466465 system_pods.go:86] 8 kube-system pods found
	I1016 19:39:39.940439  466465 system_pods.go:89] "coredns-5dd5756b68-vqfrr" [27151385-5082-44db-85d8-d01128019b89] Running
	I1016 19:39:39.940449  466465 system_pods.go:89] "etcd-old-k8s-version-663330" [04ca36a3-85e4-4634-88c0-4c378a0ca598] Running
	I1016 19:39:39.940454  466465 system_pods.go:89] "kindnet-br5zb" [896d1ca7-13c9-4d96-b69c-e0563244f2cc] Running
	I1016 19:39:39.940459  466465 system_pods.go:89] "kube-apiserver-old-k8s-version-663330" [46f58550-a22d-43fd-b011-06421fd0fa42] Running
	I1016 19:39:39.940465  466465 system_pods.go:89] "kube-controller-manager-old-k8s-version-663330" [e1bda887-2cf9-48e1-8fb4-0070a5cff2a6] Running
	I1016 19:39:39.940468  466465 system_pods.go:89] "kube-proxy-7fvsr" [7ba5690e-a465-4b69-85dd-cbaf095ec1f6] Running
	I1016 19:39:39.940473  466465 system_pods.go:89] "kube-scheduler-old-k8s-version-663330" [ab7bbfa7-318f-4d06-a11b-ddf13ed80045] Running
	I1016 19:39:39.940477  466465 system_pods.go:89] "storage-provisioner" [9ec01780-72cc-4fa0-a7b8-b049a6cc173e] Running
	I1016 19:39:39.940485  466465 system_pods.go:126] duration metric: took 1.050750238s to wait for k8s-apps to be running ...
	I1016 19:39:39.940496  466465 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:39:39.940560  466465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:39:39.958037  466465 system_svc.go:56] duration metric: took 17.531834ms WaitForService to wait for kubelet
	I1016 19:39:39.958066  466465 kubeadm.go:586] duration metric: took 15.921874681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:39:39.958088  466465 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:39:39.960878  466465 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:39:39.960914  466465 node_conditions.go:123] node cpu capacity is 2
	I1016 19:39:39.960929  466465 node_conditions.go:105] duration metric: took 2.835279ms to run NodePressure ...
	I1016 19:39:39.960942  466465 start.go:241] waiting for startup goroutines ...
	I1016 19:39:39.960949  466465 start.go:246] waiting for cluster config update ...
	I1016 19:39:39.960960  466465 start.go:255] writing updated cluster config ...
	I1016 19:39:39.961316  466465 ssh_runner.go:195] Run: rm -f paused
	I1016 19:39:39.965010  466465 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:39:39.969482  466465 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vqfrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:39.974808  466465 pod_ready.go:94] pod "coredns-5dd5756b68-vqfrr" is "Ready"
	I1016 19:39:39.974836  466465 pod_ready.go:86] duration metric: took 5.329337ms for pod "coredns-5dd5756b68-vqfrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:39.978054  466465 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:39.983259  466465 pod_ready.go:94] pod "etcd-old-k8s-version-663330" is "Ready"
	I1016 19:39:39.983289  466465 pod_ready.go:86] duration metric: took 5.210353ms for pod "etcd-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:39.986423  466465 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:39.991561  466465 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-663330" is "Ready"
	I1016 19:39:39.991587  466465 pod_ready.go:86] duration metric: took 5.136515ms for pod "kube-apiserver-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:39.994539  466465 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:40.369236  466465 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-663330" is "Ready"
	I1016 19:39:40.369274  466465 pod_ready.go:86] duration metric: took 374.70727ms for pod "kube-controller-manager-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:40.570132  466465 pod_ready.go:83] waiting for pod "kube-proxy-7fvsr" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:40.968864  466465 pod_ready.go:94] pod "kube-proxy-7fvsr" is "Ready"
	I1016 19:39:40.968892  466465 pod_ready.go:86] duration metric: took 398.730707ms for pod "kube-proxy-7fvsr" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:41.169923  466465 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:41.570153  466465 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-663330" is "Ready"
	I1016 19:39:41.570180  466465 pod_ready.go:86] duration metric: took 400.230728ms for pod "kube-scheduler-old-k8s-version-663330" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:39:41.570192  466465 pod_ready.go:40] duration metric: took 1.605150064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:39:41.631246  466465 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1016 19:39:41.634349  466465 out.go:203] 
	W1016 19:39:41.637266  466465 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1016 19:39:41.640407  466465 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1016 19:39:41.644054  466465 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-663330" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 16 19:39:39 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:39.036276564Z" level=info msg="Starting container: 3856e131d6483cb461c544f51cca5bdd7047595dd114122464403dadb653f4ab" id=c7c9b968-bf0d-400c-b036-39462a126f95 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:39:39 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:39.04206764Z" level=info msg="Started container" PID=1926 containerID=15c8c794705df6cde6c7571c4444f24fb19bd6d22da5f824b26afca60816d90a description=kube-system/coredns-5dd5756b68-vqfrr/coredns id=a9f06918-0f51-46b9-89ef-f59659c508c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf92b54e528dd1176bb50f2b0178c15e3c6bf01e69a2904642bed83c92b96c22
	Oct 16 19:39:39 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:39.04268714Z" level=info msg="Started container" PID=1925 containerID=3856e131d6483cb461c544f51cca5bdd7047595dd114122464403dadb653f4ab description=kube-system/storage-provisioner/storage-provisioner id=c7c9b968-bf0d-400c-b036-39462a126f95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c42b834779703bd313eb61f4a9641fa9bfcc7801bd39cadc531c7e09aa3ef22
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.671307872Z" level=info msg="Running pod sandbox: default/busybox/POD" id=40120c1b-6dc9-4bda-a355-8b44917e870f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.671382949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.678780003Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:122cb2082da20c65f0e8ec7a55d3d3f213d7db9b21334a3b5c597bf982ff6b19 UID:78750ccf-b912-4d16-9de5-1a8f1089eeb8 NetNS:/var/run/netns/0add34ce-cb90-4b33-bbc3-4628193d09b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001324968}] Aliases:map[]}"
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.678820807Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.68995846Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:122cb2082da20c65f0e8ec7a55d3d3f213d7db9b21334a3b5c597bf982ff6b19 UID:78750ccf-b912-4d16-9de5-1a8f1089eeb8 NetNS:/var/run/netns/0add34ce-cb90-4b33-bbc3-4628193d09b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001324968}] Aliases:map[]}"
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.690204911Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.696010502Z" level=info msg="Ran pod sandbox 122cb2082da20c65f0e8ec7a55d3d3f213d7db9b21334a3b5c597bf982ff6b19 with infra container: default/busybox/POD" id=40120c1b-6dc9-4bda-a355-8b44917e870f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.697110966Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=027d3bbf-7d17-4a8e-8976-842de513dc8e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.697337766Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=027d3bbf-7d17-4a8e-8976-842de513dc8e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.697388909Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=027d3bbf-7d17-4a8e-8976-842de513dc8e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.698261301Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0a85636b-cc75-457a-8fc0-01ffebeb5a29 name=/runtime.v1.ImageService/PullImage
	Oct 16 19:39:43 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:43.701207662Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.684768943Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0a85636b-cc75-457a-8fc0-01ffebeb5a29 name=/runtime.v1.ImageService/PullImage
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.688479216Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0de2184d-340f-401c-9a9d-4bcea1fa4cb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.691094448Z" level=info msg="Creating container: default/busybox/busybox" id=a31408fa-1416-4ceb-8932-d17de3cb1b39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.691892674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.69674748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.697444208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.735722607Z" level=info msg="Created container 3cc9f3c5446f8eb7345daa73874cdb8ce0c1df2bbfac7973e2ab2fb22fce37e1: default/busybox/busybox" id=a31408fa-1416-4ceb-8932-d17de3cb1b39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.73968551Z" level=info msg="Starting container: 3cc9f3c5446f8eb7345daa73874cdb8ce0c1df2bbfac7973e2ab2fb22fce37e1" id=0c211f4b-1bcd-4c66-b81c-57ccfacbcfeb name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:39:45 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:45.741562291Z" level=info msg="Started container" PID=1987 containerID=3cc9f3c5446f8eb7345daa73874cdb8ce0c1df2bbfac7973e2ab2fb22fce37e1 description=default/busybox/busybox id=0c211f4b-1bcd-4c66-b81c-57ccfacbcfeb name=/runtime.v1.RuntimeService/StartContainer sandboxID=122cb2082da20c65f0e8ec7a55d3d3f213d7db9b21334a3b5c597bf982ff6b19
	Oct 16 19:39:53 old-k8s-version-663330 crio[840]: time="2025-10-16T19:39:53.000677308Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	3cc9f3c5446f8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   122cb2082da20       busybox                                          default
	15c8c794705df       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      15 seconds ago      Running             coredns                   0                   bf92b54e528dd       coredns-5dd5756b68-vqfrr                         kube-system
	3856e131d6483       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago      Running             storage-provisioner       0                   5c42b83477970       storage-provisioner                              kube-system
	7844435cd37a9       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   e7e8b29e677f5       kindnet-br5zb                                    kube-system
	11ace967621b1       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   7bfb261d722e3       kube-proxy-7fvsr                                 kube-system
	89a7e7648a74c       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      51 seconds ago      Running             kube-scheduler            0                   b46805464b1c9       kube-scheduler-old-k8s-version-663330            kube-system
	66ffa5b5b6f15       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      51 seconds ago      Running             kube-apiserver            0                   272ac36e34e6e       kube-apiserver-old-k8s-version-663330            kube-system
	8edff5816ac8a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      51 seconds ago      Running             kube-controller-manager   0                   23f4a0af6334b       kube-controller-manager-old-k8s-version-663330   kube-system
	6c7618d1725b9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      51 seconds ago      Running             etcd                      0                   50dd41e252608       etcd-old-k8s-version-663330                      kube-system
	
	
	==> coredns [15c8c794705df6cde6c7571c4444f24fb19bd6d22da5f824b26afca60816d90a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53920 - 57329 "HINFO IN 7959073451467730580.1648221237852840293. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015989168s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-663330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-663330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=old-k8s-version-663330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_39_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:39:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-663330
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:39:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:39:42 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:39:42 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:39:42 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:39:42 +0000   Thu, 16 Oct 2025 19:39:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-663330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                1d0ae713-f566-4024-8f13-ca98591cb606
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-vqfrr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-old-k8s-version-663330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-br5zb                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-663330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-663330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-7fvsr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-663330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-663330 event: Registered Node old-k8s-version-663330 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-663330 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct16 19:09] overlayfs: idmapped layers are currently not supported
	[Oct16 19:11] overlayfs: idmapped layers are currently not supported
	[Oct16 19:16] overlayfs: idmapped layers are currently not supported
	[ +33.922450] overlayfs: idmapped layers are currently not supported
	[Oct16 19:18] overlayfs: idmapped layers are currently not supported
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c7618d1725b968a9a0b748203db1e9ce58edcece96eec643d1fb671a2f0233f] <==
	{"level":"info","ts":"2025-10-16T19:39:02.671567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-16T19:39:02.672399Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-16T19:39:02.672363Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-16T19:39:02.671789Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T19:39:02.672494Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-16T19:39:02.671597Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:39:02.673418Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:39:03.637215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-16T19:39:03.637384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-16T19:39:03.637448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-16T19:39:03.637487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-16T19:39:03.637525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-16T19:39:03.637567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-16T19:39:03.637617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-16T19:39:03.640664Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-663330 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-16T19:39:03.64078Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T19:39:03.642193Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-16T19:39:03.642702Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:39:03.645326Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T19:39:03.646035Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:39:03.646197Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:39:03.64627Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:39:03.649001Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-16T19:39:03.649083Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-16T19:39:03.650207Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:39:54 up  2:22,  0 user,  load average: 2.45, 3.32, 2.74
	Linux old-k8s-version-663330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7844435cd37a9dd5cd9f3030512822c9703de36f07376bb391df8e108d480408] <==
	I1016 19:39:28.216464       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:39:28.305462       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:39:28.305628       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:39:28.305646       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:39:28.305658       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:39:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:39:28.517578       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:39:28.517666       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:39:28.517705       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:39:28.605412       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 19:39:28.718184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:39:28.718215       1 metrics.go:72] Registering metrics
	I1016 19:39:28.718279       1 controller.go:711] "Syncing nftables rules"
	I1016 19:39:38.515365       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:39:38.515414       1 main.go:301] handling current node
	I1016 19:39:48.509524       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:39:48.509587       1 main.go:301] handling current node
	
	
	==> kube-apiserver [66ffa5b5b6f154f543743adda653c244cec6a0553d47c40c7d075e0be613f96c] <==
	I1016 19:39:07.881536       1 aggregator.go:166] initial CRD sync complete...
	I1016 19:39:07.881568       1 autoregister_controller.go:141] Starting autoregister controller
	I1016 19:39:07.881594       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:39:07.881645       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:39:07.882916       1 shared_informer.go:318] Caches are synced for configmaps
	I1016 19:39:07.892864       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1016 19:39:07.894359       1 controller.go:624] quota admission added evaluator for: namespaces
	I1016 19:39:07.907365       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1016 19:39:07.950193       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1016 19:39:07.965480       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:39:08.601507       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 19:39:08.607754       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 19:39:08.607838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:39:09.455578       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:39:09.523274       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:39:09.668883       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 19:39:09.697397       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1016 19:39:09.698779       1 controller.go:624] quota admission added evaluator for: endpoints
	I1016 19:39:09.720762       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:39:09.776202       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1016 19:39:11.105106       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1016 19:39:11.124574       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 19:39:11.137356       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1016 19:39:23.706356       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1016 19:39:23.740111       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8edff5816ac8a619edabc3056c966ee3002403161b2aedca5c027eaccaddb6ef] <==
	I1016 19:39:23.787722       1 event.go:307] "Event occurred" object="old-k8s-version-663330" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-663330 event: Registered Node old-k8s-version-663330 in Controller"
	I1016 19:39:23.789860       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7fvsr"
	I1016 19:39:23.840018       1 event.go:307] "Event occurred" object="kube-system/etcd-old-k8s-version-663330" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1016 19:39:23.842131       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-663330" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1016 19:39:23.842273       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-663330" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1016 19:39:23.862481       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-z4hbf"
	I1016 19:39:23.862621       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-663330" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1016 19:39:23.891956       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vqfrr"
	I1016 19:39:23.937127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.411004ms"
	I1016 19:39:23.984254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.89916ms"
	I1016 19:39:23.985539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.214µs"
	I1016 19:39:24.115294       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 19:39:24.116475       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 19:39:24.116523       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1016 19:39:24.911735       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1016 19:39:24.955316       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-z4hbf"
	I1016 19:39:25.005425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.497043ms"
	I1016 19:39:25.049985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.49413ms"
	I1016 19:39:25.084932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.189012ms"
	I1016 19:39:25.085131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.542µs"
	I1016 19:39:38.653006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.162µs"
	I1016 19:39:38.675709       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.264µs"
	I1016 19:39:38.787745       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1016 19:39:39.564202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.419089ms"
	I1016 19:39:39.564393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.764µs"
	
	
	==> kube-proxy [11ace967621b10a3012910a58281bc76cc98e6f1675f85ead61e9c89b68216a6] <==
	I1016 19:39:25.738330       1 server_others.go:69] "Using iptables proxy"
	I1016 19:39:25.752236       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1016 19:39:25.772913       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:39:25.774817       1 server_others.go:152] "Using iptables Proxier"
	I1016 19:39:25.774857       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1016 19:39:25.774870       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1016 19:39:25.774906       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1016 19:39:25.775102       1 server.go:846] "Version info" version="v1.28.0"
	I1016 19:39:25.775118       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:39:25.776076       1 config.go:188] "Starting service config controller"
	I1016 19:39:25.776173       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1016 19:39:25.776283       1 config.go:97] "Starting endpoint slice config controller"
	I1016 19:39:25.776314       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1016 19:39:25.777052       1 config.go:315] "Starting node config controller"
	I1016 19:39:25.779059       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1016 19:39:25.876597       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1016 19:39:25.876720       1 shared_informer.go:318] Caches are synced for service config
	I1016 19:39:25.881272       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [89a7e7648a74cd6b8bdb1811862c260ebe641cea69ca421f541137f0d848ff76] <==
	I1016 19:39:08.369023       1 serving.go:348] Generated self-signed cert in-memory
	I1016 19:39:09.862692       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1016 19:39:09.862724       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:39:09.868330       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1016 19:39:09.869295       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1016 19:39:09.869325       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1016 19:39:09.869352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1016 19:39:09.873597       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:39:09.873625       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1016 19:39:09.873642       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:39:09.873649       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1016 19:39:09.969441       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1016 19:39:09.974335       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1016 19:39:09.974420       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 16 19:39:24 old-k8s-version-663330 kubelet[1364]: E1016 19:39:24.970821    1364 projected.go:198] Error preparing data for projected volume kube-api-access-c2zpq for pod kube-system/kindnet-br5zb: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:24 old-k8s-version-663330 kubelet[1364]: E1016 19:39:24.970898    1364 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/896d1ca7-13c9-4d96-b69c-e0563244f2cc-kube-api-access-c2zpq podName:896d1ca7-13c9-4d96-b69c-e0563244f2cc nodeName:}" failed. No retries permitted until 2025-10-16 19:39:25.470876858 +0000 UTC m=+14.403780545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c2zpq" (UniqueName: "kubernetes.io/projected/896d1ca7-13c9-4d96-b69c-e0563244f2cc-kube-api-access-c2zpq") pod "kindnet-br5zb" (UID: "896d1ca7-13c9-4d96-b69c-e0563244f2cc") : failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:24 old-k8s-version-663330 kubelet[1364]: E1016 19:39:24.973383    1364 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:24 old-k8s-version-663330 kubelet[1364]: E1016 19:39:24.973428    1364 projected.go:198] Error preparing data for projected volume kube-api-access-g8jrg for pod kube-system/kube-proxy-7fvsr: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:24 old-k8s-version-663330 kubelet[1364]: E1016 19:39:24.973494    1364 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ba5690e-a465-4b69-85dd-cbaf095ec1f6-kube-api-access-g8jrg podName:7ba5690e-a465-4b69-85dd-cbaf095ec1f6 nodeName:}" failed. No retries permitted until 2025-10-16 19:39:25.473473497 +0000 UTC m=+14.406377184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g8jrg" (UniqueName: "kubernetes.io/projected/7ba5690e-a465-4b69-85dd-cbaf095ec1f6-kube-api-access-g8jrg") pod "kube-proxy-7fvsr" (UID: "7ba5690e-a465-4b69-85dd-cbaf095ec1f6") : failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:25 old-k8s-version-663330 kubelet[1364]: W1016 19:39:25.629931    1364 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/crio-7bfb261d722e37e0912adf23b7cc71c0f1d0159b1e1ca8d68580772f2ce058e6 WatchSource:0}: Error finding container 7bfb261d722e37e0912adf23b7cc71c0f1d0159b1e1ca8d68580772f2ce058e6: Status 404 returned error can't find the container with id 7bfb261d722e37e0912adf23b7cc71c0f1d0159b1e1ca8d68580772f2ce058e6
	Oct 16 19:39:28 old-k8s-version-663330 kubelet[1364]: I1016 19:39:28.502919    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7fvsr" podStartSLOduration=5.502865462 podCreationTimestamp="2025-10-16 19:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:39:26.48711705 +0000 UTC m=+15.420020736" watchObservedRunningTime="2025-10-16 19:39:28.502865462 +0000 UTC m=+17.435769149"
	Oct 16 19:39:31 old-k8s-version-663330 kubelet[1364]: I1016 19:39:31.329213    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-br5zb" podStartSLOduration=5.828896078 podCreationTimestamp="2025-10-16 19:39:23 +0000 UTC" firstStartedPulling="2025-10-16 19:39:25.609882238 +0000 UTC m=+14.542785925" lastFinishedPulling="2025-10-16 19:39:28.110151225 +0000 UTC m=+17.043054912" observedRunningTime="2025-10-16 19:39:28.50427001 +0000 UTC m=+17.437173705" watchObservedRunningTime="2025-10-16 19:39:31.329165065 +0000 UTC m=+20.262068776"
	Oct 16 19:39:38 old-k8s-version-663330 kubelet[1364]: I1016 19:39:38.617128    1364 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 16 19:39:38 old-k8s-version-663330 kubelet[1364]: I1016 19:39:38.651507    1364 topology_manager.go:215] "Topology Admit Handler" podUID="27151385-5082-44db-85d8-d01128019b89" podNamespace="kube-system" podName="coredns-5dd5756b68-vqfrr"
	Oct 16 19:39:38 old-k8s-version-663330 kubelet[1364]: I1016 19:39:38.662412    1364 topology_manager.go:215] "Topology Admit Handler" podUID="9ec01780-72cc-4fa0-a7b8-b049a6cc173e" podNamespace="kube-system" podName="storage-provisioner"
	Oct 16 19:39:38 old-k8s-version-663330 kubelet[1364]: I1016 19:39:38.747529    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gd2\" (UniqueName: \"kubernetes.io/projected/9ec01780-72cc-4fa0-a7b8-b049a6cc173e-kube-api-access-s2gd2\") pod \"storage-provisioner\" (UID: \"9ec01780-72cc-4fa0-a7b8-b049a6cc173e\") " pod="kube-system/storage-provisioner"
	Oct 16 19:39:38 old-k8s-version-663330 kubelet[1364]: I1016 19:39:38.747591    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9ec01780-72cc-4fa0-a7b8-b049a6cc173e-tmp\") pod \"storage-provisioner\" (UID: \"9ec01780-72cc-4fa0-a7b8-b049a6cc173e\") " pod="kube-system/storage-provisioner"
	Oct 16 19:39:38 old-k8s-version-663330 kubelet[1364]: I1016 19:39:38.747622    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kknvf\" (UniqueName: \"kubernetes.io/projected/27151385-5082-44db-85d8-d01128019b89-kube-api-access-kknvf\") pod \"coredns-5dd5756b68-vqfrr\" (UID: \"27151385-5082-44db-85d8-d01128019b89\") " pod="kube-system/coredns-5dd5756b68-vqfrr"
	Oct 16 19:39:38 old-k8s-version-663330 kubelet[1364]: I1016 19:39:38.747647    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27151385-5082-44db-85d8-d01128019b89-config-volume\") pod \"coredns-5dd5756b68-vqfrr\" (UID: \"27151385-5082-44db-85d8-d01128019b89\") " pod="kube-system/coredns-5dd5756b68-vqfrr"
	Oct 16 19:39:39 old-k8s-version-663330 kubelet[1364]: I1016 19:39:39.549794    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.54974029 podCreationTimestamp="2025-10-16 19:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:39:39.52681219 +0000 UTC m=+28.459715876" watchObservedRunningTime="2025-10-16 19:39:39.54974029 +0000 UTC m=+28.482643977"
	Oct 16 19:39:41 old-k8s-version-663330 kubelet[1364]: I1016 19:39:41.868210    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vqfrr" podStartSLOduration=18.868151304 podCreationTimestamp="2025-10-16 19:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:39:39.553666909 +0000 UTC m=+28.486570596" watchObservedRunningTime="2025-10-16 19:39:41.868151304 +0000 UTC m=+30.801055015"
	Oct 16 19:39:41 old-k8s-version-663330 kubelet[1364]: I1016 19:39:41.869112    1364 topology_manager.go:215] "Topology Admit Handler" podUID="78750ccf-b912-4d16-9de5-1a8f1089eeb8" podNamespace="default" podName="busybox"
	Oct 16 19:39:41 old-k8s-version-663330 kubelet[1364]: W1016 19:39:41.877379    1364 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-663330" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-663330' and this object
	Oct 16 19:39:41 old-k8s-version-663330 kubelet[1364]: E1016 19:39:41.877590    1364 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-663330" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-663330' and this object
	Oct 16 19:39:41 old-k8s-version-663330 kubelet[1364]: I1016 19:39:41.977273    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfrx\" (UniqueName: \"kubernetes.io/projected/78750ccf-b912-4d16-9de5-1a8f1089eeb8-kube-api-access-zwfrx\") pod \"busybox\" (UID: \"78750ccf-b912-4d16-9de5-1a8f1089eeb8\") " pod="default/busybox"
	Oct 16 19:39:43 old-k8s-version-663330 kubelet[1364]: E1016 19:39:43.088968    1364 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:43 old-k8s-version-663330 kubelet[1364]: E1016 19:39:43.089021    1364 projected.go:198] Error preparing data for projected volume kube-api-access-zwfrx for pod default/busybox: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:43 old-k8s-version-663330 kubelet[1364]: E1016 19:39:43.089407    1364 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78750ccf-b912-4d16-9de5-1a8f1089eeb8-kube-api-access-zwfrx podName:78750ccf-b912-4d16-9de5-1a8f1089eeb8 nodeName:}" failed. No retries permitted until 2025-10-16 19:39:43.589076727 +0000 UTC m=+32.521980414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zwfrx" (UniqueName: "kubernetes.io/projected/78750ccf-b912-4d16-9de5-1a8f1089eeb8-kube-api-access-zwfrx") pod "busybox" (UID: "78750ccf-b912-4d16-9de5-1a8f1089eeb8") : failed to sync configmap cache: timed out waiting for the condition
	Oct 16 19:39:43 old-k8s-version-663330 kubelet[1364]: W1016 19:39:43.695562    1364 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/crio-122cb2082da20c65f0e8ec7a55d3d3f213d7db9b21334a3b5c597bf982ff6b19 WatchSource:0}: Error finding container 122cb2082da20c65f0e8ec7a55d3d3f213d7db9b21334a3b5c597bf982ff6b19: Status 404 returned error can't find the container with id 122cb2082da20c65f0e8ec7a55d3d3f213d7db9b21334a3b5c597bf982ff6b19
	
	
	==> storage-provisioner [3856e131d6483cb461c544f51cca5bdd7047595dd114122464403dadb653f4ab] <==
	I1016 19:39:39.069779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:39:39.083654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:39:39.083728       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1016 19:39:39.093190       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:39:39.093637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea29456-c19e-483f-960c-d85113c7aa2e", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-663330_fdbb4b8d-bfad-4a4b-a091-9b0d9dc466aa became leader
	I1016 19:39:39.095492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-663330_fdbb4b8d-bfad-4a4b-a091-9b0d9dc466aa!
	I1016 19:39:39.198417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-663330_fdbb4b8d-bfad-4a4b-a091-9b0d9dc466aa!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-663330 -n old-k8s-version-663330
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-663330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.51s)
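The run above logged a kubectl/cluster version skew (client 1.33.2 against Kubernetes 1.28.0, minor skew 5). A minimal way to re-run the check without the skew, reusing only the binary path, profile name, and hint already printed in this log, is to route kubectl through the profile's bundled binary, for example:

	# sketch only: bundled kubectl for the old-k8s-version-663330 profile, per the hint in the log above
	out/minikube-linux-arm64 -p old-k8s-version-663330 kubectl -- get pods -A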

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (8.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-663330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-663330 --alsologtostderr -v=1: exit status 80 (2.068953316s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-663330 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:41:10.883997  472941 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:41:10.884101  472941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:10.884107  472941 out.go:374] Setting ErrFile to fd 2...
	I1016 19:41:10.884111  472941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:10.884374  472941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:41:10.884607  472941 out.go:368] Setting JSON to false
	I1016 19:41:10.884625  472941 mustload.go:65] Loading cluster: old-k8s-version-663330
	I1016 19:41:10.884979  472941 config.go:182] Loaded profile config "old-k8s-version-663330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 19:41:10.885571  472941 cli_runner.go:164] Run: docker container inspect old-k8s-version-663330 --format={{.State.Status}}
	I1016 19:41:10.911988  472941 host.go:66] Checking if "old-k8s-version-663330" exists ...
	I1016 19:41:10.912409  472941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:41:11.005512  472941 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-16 19:41:10.99535772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:41:11.006260  472941 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-663330 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 19:41:11.009783  472941 out.go:179] * Pausing node old-k8s-version-663330 ... 
	I1016 19:41:11.012703  472941 host.go:66] Checking if "old-k8s-version-663330" exists ...
	I1016 19:41:11.013165  472941 ssh_runner.go:195] Run: systemctl --version
	I1016 19:41:11.013260  472941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-663330
	I1016 19:41:11.042667  472941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/old-k8s-version-663330/id_rsa Username:docker}
	I1016 19:41:11.153661  472941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:41:11.179487  472941 pause.go:52] kubelet running: true
	I1016 19:41:11.179617  472941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:41:11.460605  472941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:41:11.460710  472941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:41:11.547854  472941 cri.go:89] found id: "cd6d956ac80482a1f6bcfd367df30bb090e7bdac8f8122b9e06803726d3d4015"
	I1016 19:41:11.547872  472941 cri.go:89] found id: "9e2febbb05c3f2f1d0b7636d3a32baf4c41042a9ed63eda2b1f9db102f12f8e7"
	I1016 19:41:11.547876  472941 cri.go:89] found id: "83d913eaf88e4605f8517296d5310c6a465cdae0f4f71ad50a666244e2417d90"
	I1016 19:41:11.547880  472941 cri.go:89] found id: "9c425ef1360ca51d09afcd876feca2bc97e4e424b623de6bdcadcc122a937383"
	I1016 19:41:11.547884  472941 cri.go:89] found id: "f48070f990185c1ad93ea2494701be1ee1f88ad24465a040b14b88c4121179b2"
	I1016 19:41:11.547888  472941 cri.go:89] found id: "ca9f3c035162136550e79b6c0014343408983870ed9af23ed59991d9a05e9e3e"
	I1016 19:41:11.547891  472941 cri.go:89] found id: "eb71114d965337f8d2433dfb6782af47091510173170df63ce2c629eed64d425"
	I1016 19:41:11.547895  472941 cri.go:89] found id: "e4d549de261d3aa9d5926c279931f54e3030d5f24546a2b72e6f2aa811185db2"
	I1016 19:41:11.547898  472941 cri.go:89] found id: "d276f51870b2329c6a418b3790b4e14cdc49f20c8f1e281021038d55047a959f"
	I1016 19:41:11.547903  472941 cri.go:89] found id: "6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb"
	I1016 19:41:11.547907  472941 cri.go:89] found id: "61125ed3a2c1c03d74ee146ef4ad2c1ade1a0a25c74fb8e34dd2f73b5e7b97bd"
	I1016 19:41:11.547910  472941 cri.go:89] found id: ""
	I1016 19:41:11.547957  472941 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:41:11.570301  472941 retry.go:31] will retry after 183.710823ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:41:11Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:41:11.757286  472941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:41:11.790951  472941 pause.go:52] kubelet running: false
	I1016 19:41:11.791016  472941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:41:12.006991  472941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:41:12.007078  472941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:41:12.084085  472941 cri.go:89] found id: "cd6d956ac80482a1f6bcfd367df30bb090e7bdac8f8122b9e06803726d3d4015"
	I1016 19:41:12.084108  472941 cri.go:89] found id: "9e2febbb05c3f2f1d0b7636d3a32baf4c41042a9ed63eda2b1f9db102f12f8e7"
	I1016 19:41:12.084113  472941 cri.go:89] found id: "83d913eaf88e4605f8517296d5310c6a465cdae0f4f71ad50a666244e2417d90"
	I1016 19:41:12.084117  472941 cri.go:89] found id: "9c425ef1360ca51d09afcd876feca2bc97e4e424b623de6bdcadcc122a937383"
	I1016 19:41:12.084121  472941 cri.go:89] found id: "f48070f990185c1ad93ea2494701be1ee1f88ad24465a040b14b88c4121179b2"
	I1016 19:41:12.084125  472941 cri.go:89] found id: "ca9f3c035162136550e79b6c0014343408983870ed9af23ed59991d9a05e9e3e"
	I1016 19:41:12.084128  472941 cri.go:89] found id: "eb71114d965337f8d2433dfb6782af47091510173170df63ce2c629eed64d425"
	I1016 19:41:12.084131  472941 cri.go:89] found id: "e4d549de261d3aa9d5926c279931f54e3030d5f24546a2b72e6f2aa811185db2"
	I1016 19:41:12.084135  472941 cri.go:89] found id: "d276f51870b2329c6a418b3790b4e14cdc49f20c8f1e281021038d55047a959f"
	I1016 19:41:12.084141  472941 cri.go:89] found id: "6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb"
	I1016 19:41:12.084144  472941 cri.go:89] found id: "61125ed3a2c1c03d74ee146ef4ad2c1ade1a0a25c74fb8e34dd2f73b5e7b97bd"
	I1016 19:41:12.084148  472941 cri.go:89] found id: ""
	I1016 19:41:12.084202  472941 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:41:12.096390  472941 retry.go:31] will retry after 421.455682ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:41:12Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:41:12.519059  472941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:41:12.535230  472941 pause.go:52] kubelet running: false
	I1016 19:41:12.535289  472941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:41:12.739104  472941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:41:12.739180  472941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:41:12.826380  472941 cri.go:89] found id: "cd6d956ac80482a1f6bcfd367df30bb090e7bdac8f8122b9e06803726d3d4015"
	I1016 19:41:12.826398  472941 cri.go:89] found id: "9e2febbb05c3f2f1d0b7636d3a32baf4c41042a9ed63eda2b1f9db102f12f8e7"
	I1016 19:41:12.826403  472941 cri.go:89] found id: "83d913eaf88e4605f8517296d5310c6a465cdae0f4f71ad50a666244e2417d90"
	I1016 19:41:12.826406  472941 cri.go:89] found id: "9c425ef1360ca51d09afcd876feca2bc97e4e424b623de6bdcadcc122a937383"
	I1016 19:41:12.826409  472941 cri.go:89] found id: "f48070f990185c1ad93ea2494701be1ee1f88ad24465a040b14b88c4121179b2"
	I1016 19:41:12.826413  472941 cri.go:89] found id: "ca9f3c035162136550e79b6c0014343408983870ed9af23ed59991d9a05e9e3e"
	I1016 19:41:12.826416  472941 cri.go:89] found id: "eb71114d965337f8d2433dfb6782af47091510173170df63ce2c629eed64d425"
	I1016 19:41:12.826419  472941 cri.go:89] found id: "e4d549de261d3aa9d5926c279931f54e3030d5f24546a2b72e6f2aa811185db2"
	I1016 19:41:12.826435  472941 cri.go:89] found id: "d276f51870b2329c6a418b3790b4e14cdc49f20c8f1e281021038d55047a959f"
	I1016 19:41:12.826446  472941 cri.go:89] found id: "6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb"
	I1016 19:41:12.826449  472941 cri.go:89] found id: "61125ed3a2c1c03d74ee146ef4ad2c1ade1a0a25c74fb8e34dd2f73b5e7b97bd"
	I1016 19:41:12.826452  472941 cri.go:89] found id: ""
	I1016 19:41:12.826513  472941 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:41:12.845118  472941 out.go:203] 
	W1016 19:41:12.847986  472941 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:41:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:41:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 19:41:12.848010  472941 out.go:285] * 
	* 
	W1016 19:41:12.859027  472941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:41:12.862489  472941 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-663330 --alsologtostderr -v=1 failed: exit status 80
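Every pause retry above fails at the same step: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", even though crictl on the same node still returns the kube-system and kubernetes-dashboard container IDs. A minimal way to confirm that by hand might look like the commands below; they are not part of this run, and the container name is taken from the docker inspect output in the post-mortem.

	docker exec old-k8s-version-663330 ls /run/runc        # expected to fail: no such file or directory
	docker exec old-k8s-version-663330 crictl ps --quiet   # CRI-O still lists the running containers
	docker exec old-k8s-version-663330 runc list -f json   # reproduces the exit status 1 seen in the log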
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-663330
helpers_test.go:243: (dbg) docker inspect old-k8s-version-663330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178",
	        "Created": "2025-10-16T19:38:44.050016018Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:40:08.149462107Z",
	            "FinishedAt": "2025-10-16T19:40:07.284235749Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/hosts",
	        "LogPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178-json.log",
	        "Name": "/old-k8s-version-663330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-663330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-663330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178",
	                "LowerDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-663330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-663330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-663330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-663330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-663330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4fcdf46f21f595b10589acd4d6d7e3d7296b723f51f0f9c6ecfd0deb97b97870",
	            "SandboxKey": "/var/run/docker/netns/4fcdf46f21f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-663330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:d0:65:5e:8a:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "954005e57e721c33bcbbc4d582e61219818cba738fb17844a562d84e477b2115",
	                    "EndpointID": "6513cf01b463764416dc47985f669c2adbfb172bff19c98a3262642d83f56031",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-663330",
	                        "99b40d8e6d48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
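The NetworkSettings.Ports block above shows 22/tcp published on 127.0.0.1:33418, which is the SSH endpoint the failed pause command connected to earlier (sshutil.go: new ssh client ... Port:33418). That mapping is what the test resolves with the inspect template already visible in the log; run by hand (illustrative only, shell quoting adjusted) it would be:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-663330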
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330: exit status 2 (428.674623ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-663330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-663330 logs -n 25: (2.141666718s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-078761 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo containerd config dump                                                                                                                                                                                                  │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo crio config                                                                                                                                                                                                             │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ delete  │ -p cilium-078761                                                                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p force-systemd-env-871877                                                                                                                                                                                                                   │ force-systemd-env-871877 │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-options-853056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ cert-options-853056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ -p cert-options-853056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p cert-options-853056                                                                                                                                                                                                                        │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:39 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │                     │
	│ stop    │ -p old-k8s-version-663330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │ 16 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:41:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:41:01.063095  472198 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:41:01.063237  472198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:01.063242  472198 out.go:374] Setting ErrFile to fd 2...
	I1016 19:41:01.063246  472198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:01.063513  472198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:41:01.063908  472198 out.go:368] Setting JSON to false
	I1016 19:41:01.064894  472198 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8590,"bootTime":1760635071,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:41:01.064950  472198 start.go:141] virtualization:  
	I1016 19:41:01.068399  472198 out.go:179] * [cert-expiration-828182] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:41:01.072205  472198 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:41:01.072286  472198 notify.go:220] Checking for updates...
	I1016 19:41:01.075389  472198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:41:01.078445  472198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:41:01.081399  472198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:41:01.084308  472198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:41:01.087319  472198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:41:01.091037  472198 config.go:182] Loaded profile config "cert-expiration-828182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:01.091684  472198 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:41:01.128102  472198 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:41:01.128214  472198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:41:01.191860  472198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:41:01.181989018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:41:01.191957  472198 docker.go:318] overlay module found
	I1016 19:41:01.195247  472198 out.go:179] * Using the docker driver based on existing profile
	I1016 19:41:01.198432  472198 start.go:305] selected driver: docker
	I1016 19:41:01.198444  472198 start.go:925] validating driver "docker" against &{Name:cert-expiration-828182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:01.198576  472198 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:41:01.199391  472198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:41:01.262447  472198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:41:01.251396403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:41:01.262807  472198 cni.go:84] Creating CNI manager for ""
	I1016 19:41:01.262869  472198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:41:01.262910  472198 start.go:349] cluster config:
	{Name:cert-expiration-828182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:01.266352  472198 out.go:179] * Starting "cert-expiration-828182" primary control-plane node in "cert-expiration-828182" cluster
	I1016 19:41:01.269104  472198 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:41:01.272080  472198 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:41:01.275172  472198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:01.275243  472198 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:41:01.275256  472198 cache.go:58] Caching tarball of preloaded images
	I1016 19:41:01.275258  472198 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:41:01.275342  472198 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:41:01.275350  472198 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:41:01.275448  472198 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/config.json ...
	I1016 19:41:01.297233  472198 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:41:01.297245  472198 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:41:01.297257  472198 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:41:01.297278  472198 start.go:360] acquireMachinesLock for cert-expiration-828182: {Name:mke633a1e943d77132d294d88356b824676d1e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:41:01.297337  472198 start.go:364] duration metric: took 40.189µs to acquireMachinesLock for "cert-expiration-828182"
	I1016 19:41:01.297355  472198 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:41:01.297359  472198 fix.go:54] fixHost starting: 
	I1016 19:41:01.297608  472198 cli_runner.go:164] Run: docker container inspect cert-expiration-828182 --format={{.State.Status}}
	I1016 19:41:01.315587  472198 fix.go:112] recreateIfNeeded on cert-expiration-828182: state=Running err=<nil>
	W1016 19:41:01.315607  472198 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 19:41:01.318732  472198 out.go:252] * Updating the running docker "cert-expiration-828182" container ...
	I1016 19:41:01.318756  472198 machine.go:93] provisionDockerMachine start ...
	I1016 19:41:01.318845  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:01.343663  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:01.344045  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:01.344053  472198 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:41:01.494048  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-828182
	
	I1016 19:41:01.494070  472198 ubuntu.go:182] provisioning hostname "cert-expiration-828182"
	I1016 19:41:01.494136  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:01.513531  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:01.513849  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:01.513858  472198 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-828182 && echo "cert-expiration-828182" | sudo tee /etc/hostname
	I1016 19:41:01.672805  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-828182
	
	I1016 19:41:01.672888  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:01.692835  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:01.693180  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:01.693196  472198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-828182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-828182/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-828182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:41:01.842447  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:41:01.842473  472198 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:41:01.842497  472198 ubuntu.go:190] setting up certificates
	I1016 19:41:01.842513  472198 provision.go:84] configureAuth start
	I1016 19:41:01.842573  472198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-828182
	I1016 19:41:01.867168  472198 provision.go:143] copyHostCerts
	I1016 19:41:01.867228  472198 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:41:01.867242  472198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:41:01.867318  472198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:41:01.867433  472198 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:41:01.867437  472198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:41:01.867468  472198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:41:01.867529  472198 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:41:01.867532  472198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:41:01.867554  472198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:41:01.867607  472198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-828182 san=[127.0.0.1 192.168.85.2 cert-expiration-828182 localhost minikube]
	I1016 19:41:02.480547  472198 provision.go:177] copyRemoteCerts
	I1016 19:41:02.480601  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:41:02.480649  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:02.503611  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:02.610523  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:41:02.634081  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1016 19:41:02.653301  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:41:02.672629  472198 provision.go:87] duration metric: took 830.093571ms to configureAuth
	I1016 19:41:02.672658  472198 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:41:02.672856  472198 config.go:182] Loaded profile config "cert-expiration-828182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:02.672957  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:02.691522  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:02.691852  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:02.691875  472198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:41:08.048332  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:41:08.048345  472198 machine.go:96] duration metric: took 6.729582891s to provisionDockerMachine
	I1016 19:41:08.048354  472198 start.go:293] postStartSetup for "cert-expiration-828182" (driver="docker")
	I1016 19:41:08.048364  472198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:41:08.048426  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:41:08.048476  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.067079  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.169055  472198 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:41:08.172544  472198 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:41:08.172561  472198 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:41:08.172571  472198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:41:08.172624  472198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:41:08.172709  472198 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:41:08.172813  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:41:08.183213  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:41:08.202837  472198 start.go:296] duration metric: took 154.309285ms for postStartSetup
	I1016 19:41:08.202923  472198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:41:08.202994  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.232023  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.338906  472198 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:41:08.345246  472198 fix.go:56] duration metric: took 7.047872175s for fixHost
	I1016 19:41:08.345260  472198 start.go:83] releasing machines lock for "cert-expiration-828182", held for 7.04791604s
	I1016 19:41:08.345334  472198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-828182
	I1016 19:41:08.363676  472198 ssh_runner.go:195] Run: cat /version.json
	I1016 19:41:08.363718  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.363721  472198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:41:08.363769  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.392183  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.401249  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.492985  472198 ssh_runner.go:195] Run: systemctl --version
	I1016 19:41:08.612400  472198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:41:08.667930  472198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:41:08.672527  472198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:41:08.672600  472198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:41:08.681759  472198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:41:08.681772  472198 start.go:495] detecting cgroup driver to use...
	I1016 19:41:08.681804  472198 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:41:08.681850  472198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:41:08.697474  472198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:41:08.711148  472198 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:41:08.711206  472198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:41:08.728898  472198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:41:08.743061  472198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:41:08.893809  472198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:41:09.053903  472198 docker.go:234] disabling docker service ...
	I1016 19:41:09.053973  472198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:41:09.069382  472198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:41:09.085630  472198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:41:09.232872  472198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:41:09.387062  472198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:41:09.400384  472198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:41:09.415293  472198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:41:09.415370  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.425030  472198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:41:09.425102  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.434679  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.443966  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.461367  472198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:41:09.471907  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.481606  472198 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.490210  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.499438  472198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:41:09.507014  472198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:41:09.516450  472198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:09.664217  472198 ssh_runner.go:195] Run: sudo systemctl restart crio
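	(The sed invocations above all edit the same CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of what that drop-in ends up containing after these edits; the key names and values are taken from the log lines above, while the [crio.image]/[crio.runtime] table headers are an assumption about the stock kicbase layout:
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)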
	I1016 19:41:09.852904  472198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:41:09.852967  472198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:41:09.857175  472198 start.go:563] Will wait 60s for crictl version
	I1016 19:41:09.857241  472198 ssh_runner.go:195] Run: which crictl
	I1016 19:41:09.861059  472198 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:41:09.892390  472198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:41:09.892486  472198 ssh_runner.go:195] Run: crio --version
	I1016 19:41:09.921718  472198 ssh_runner.go:195] Run: crio --version
	I1016 19:41:09.962225  472198 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:41:09.965274  472198 cli_runner.go:164] Run: docker network inspect cert-expiration-828182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:41:09.979965  472198 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:41:09.983979  472198 kubeadm.go:883] updating cluster {Name:cert-expiration-828182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:41:09.984089  472198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:09.984146  472198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:41:10.031418  472198 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:41:10.031430  472198 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:41:10.031495  472198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:41:10.061742  472198 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:41:10.061755  472198 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:41:10.061761  472198 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:41:10.061919  472198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-828182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:41:10.062009  472198 ssh_runner.go:195] Run: crio config
	I1016 19:41:10.125794  472198 cni.go:84] Creating CNI manager for ""
	I1016 19:41:10.125806  472198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:41:10.125822  472198 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:41:10.125843  472198 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-828182 NodeName:cert-expiration-828182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:41:10.126006  472198 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-828182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:41:10.126074  472198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:41:10.135760  472198 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:41:10.135832  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:41:10.143643  472198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1016 19:41:10.156506  472198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:41:10.169449  472198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
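	(If the rendered manifest above needs to be sanity-checked by hand, recent kubeadm releases can validate it directly. A hedged sketch; the binary path and the .new file name are taken from the log, and availability of the `config validate` subcommand in this v1.34.1 binary is an assumption:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)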
	I1016 19:41:10.184150  472198 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:41:10.188395  472198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:10.339871  472198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:41:10.355586  472198 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182 for IP: 192.168.85.2
	I1016 19:41:10.355598  472198 certs.go:195] generating shared ca certs ...
	I1016 19:41:10.355612  472198 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:10.355758  472198 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:41:10.355795  472198 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:41:10.355801  472198 certs.go:257] generating profile certs ...
	W1016 19:41:10.355926  472198 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1016 19:41:10.355950  472198 certs.go:624] cert expired /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt: expiration: 2025-10-16 19:40:35 +0000 UTC, now: 2025-10-16 19:41:10.355944687 +0000 UTC m=+9.339877465
	I1016 19:41:10.356055  472198 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.key
	I1016 19:41:10.356069  472198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt with IP's: []
	I1016 19:41:10.727526  472198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt ...
	I1016 19:41:10.727542  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt: {Name:mk018de438ee3946a3b6dcaf0ac6ccaeff1e56c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:10.727752  472198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.key ...
	I1016 19:41:10.727761  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.key: {Name:mk88c9970afc913a3e219a838f242272de71563d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1016 19:41:10.727975  472198 out.go:285] ! Certificate apiserver.crt.f9319a21 has expired. Generating a new one...
	I1016 19:41:10.728060  472198 certs.go:624] cert expired /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt.f9319a21: expiration: 2025-10-16 19:40:35 +0000 UTC, now: 2025-10-16 19:41:10.728052681 +0000 UTC m=+9.711985467
	I1016 19:41:10.728173  472198 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.key.f9319a21
	I1016 19:41:10.728188  472198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt.f9319a21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
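	(The expiry check that triggers this regeneration can be reproduced from the CI workspace with openssl, inspecting only the NotAfter field. A sketch; the profile path is the one logged above:
	    openssl x509 -noout -enddate \
	      -in /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt
	)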
	
	
	==> CRI-O <==
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.073179267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.085756304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.086925758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.105780191Z" level=info msg="Created container 6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb/dashboard-metrics-scraper" id=8cec1c10-6f1f-4231-a4c8-a0e8bcf24021 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.107027709Z" level=info msg="Starting container: 6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb" id=d1b1b804-b3df-4225-b36e-79218a8bc033 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.10905121Z" level=info msg="Started container" PID=1631 containerID=6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb/dashboard-metrics-scraper id=d1b1b804-b3df-4225-b36e-79218a8bc033 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074
	Oct 16 19:40:56 old-k8s-version-663330 conmon[1629]: conmon 6c720d140a0e148385d0 <ninfo>: container 1631 exited with status 1
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.307904448Z" level=info msg="Removing container: abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca" id=582367a4-0ff5-48a0-b195-c5a34108ded3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.322548161Z" level=info msg="Error loading conmon cgroup of container abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca: cgroup deleted" id=582367a4-0ff5-48a0-b195-c5a34108ded3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.328169642Z" level=info msg="Removed container abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb/dashboard-metrics-scraper" id=582367a4-0ff5-48a0-b195-c5a34108ded3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.615231122Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.620222345Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.62026123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.620284582Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.623684144Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.623852334Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.6238892Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.627240212Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.62727565Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.627301291Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.630620245Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.630693715Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.630719307Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.634094015Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.634139645Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	6c720d140a0e1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   dec1fcaa6ee97       dashboard-metrics-scraper-5f989dc9cf-kccdb       kubernetes-dashboard
	cd6d956ac8048       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   98fb720208806       storage-provisioner                              kube-system
	61125ed3a2c1c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   1583d0cb0750e       kubernetes-dashboard-8694d4445c-8z9qd            kubernetes-dashboard
	9e2febbb05c3f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   3b13e8cd9dc8b       coredns-5dd5756b68-vqfrr                         kube-system
	83d913eaf88e4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   98fb720208806       storage-provisioner                              kube-system
	9c425ef1360ca       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   fc4b705d154b6       kube-proxy-7fvsr                                 kube-system
	f48070f990185       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   0e48fa61d13c6       kindnet-br5zb                                    kube-system
	80d9e9b9780ae       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   3a312851c8f83       busybox                                          default
	ca9f3c0351621       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   f9ee4acc650d0       kube-controller-manager-old-k8s-version-663330   kube-system
	eb71114d96533       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   f4d980f103e34       kube-scheduler-old-k8s-version-663330            kube-system
	e4d549de261d3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   4f0020cc7dcd5       kube-apiserver-old-k8s-version-663330            kube-system
	d276f51870b23       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   0cbf91771546b       etcd-old-k8s-version-663330                      kube-system
	
	
	==> coredns [9e2febbb05c3f2f1d0b7636d3a32baf4c41042a9ed63eda2b1f9db102f12f8e7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41319 - 50623 "HINFO IN 7568889462730369034.6524799246419944909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046578537s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
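	(The i/o timeout above is CoreDNS failing to reach the in-cluster apiserver VIP at 10.96.0.1:443. A quick way to check whether that Service has backing endpoints, sketched with standard kubectl and assuming a kubeconfig context pointing at this cluster:
	    kubectl get svc,endpoints kubernetes -n default
	)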
	
	
	==> describe nodes <==
	Name:               old-k8s-version-663330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-663330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=old-k8s-version-663330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_39_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:39:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-663330
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:41:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-663330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                1d0ae713-f566-4024-8f13-ca98591cb606
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-vqfrr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-663330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-br5zb                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-663330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-663330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-7fvsr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-663330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-kccdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8z9qd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 108s                   kube-proxy       
	  Normal  Starting                 51s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-663330 event: Registered Node old-k8s-version-663330 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-663330 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                    node-controller  Node old-k8s-version-663330 event: Registered Node old-k8s-version-663330 in Controller
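	(This section is a node describe; the same view can be regenerated against the live cluster, assuming a kubeconfig for this profile, with:
	    kubectl describe node old-k8s-version-663330
	)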
	
	
	==> dmesg <==
	[Oct16 19:11] overlayfs: idmapped layers are currently not supported
	[Oct16 19:16] overlayfs: idmapped layers are currently not supported
	[ +33.922450] overlayfs: idmapped layers are currently not supported
	[Oct16 19:18] overlayfs: idmapped layers are currently not supported
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d276f51870b2329c6a418b3790b4e14cdc49f20c8f1e281021038d55047a959f] <==
	{"level":"info","ts":"2025-10-16T19:40:15.806803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T19:40:15.806814Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T19:40:15.807072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-16T19:40:15.807128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-16T19:40:15.807196Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:40:15.807221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:40:15.810379Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-16T19:40:15.81176Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-16T19:40:15.811796Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-16T19:40:15.811846Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:40:15.811855Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:40:17.155169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-16T19:40:17.155251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-16T19:40:17.155287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-16T19:40:17.155311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.155319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.155332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.155347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.158521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T19:40:17.160573Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-16T19:40:17.158487Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-663330 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-16T19:40:17.168432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T19:40:17.168574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-16T19:40:17.176208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-16T19:40:17.186945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:41:14 up  2:23,  0 user,  load average: 2.75, 3.19, 2.74
	Linux old-k8s-version-663330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f48070f990185c1ad93ea2494701be1ee1f88ad24465a040b14b88c4121179b2] <==
	I1016 19:40:23.409094       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:40:23.409553       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:40:23.409759       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:40:23.409805       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:40:23.409850       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:40:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:40:23.611376       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:40:23.611630       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:40:23.611743       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:40:23.611915       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:40:53.611479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:40:53.611523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:40:53.611646       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 19:40:53.611789       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1016 19:40:55.217223       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:40:55.217254       1 metrics.go:72] Registering metrics
	I1016 19:40:55.217324       1 controller.go:711] "Syncing nftables rules"
	I1016 19:41:03.614904       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:41:03.614972       1 main.go:301] handling current node
	I1016 19:41:13.617969       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:41:13.618008       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e4d549de261d3aa9d5926c279931f54e3030d5f24546a2b72e6f2aa811185db2] <==
	I1016 19:40:21.180703       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:40:21.196231       1 shared_informer.go:318] Caches are synced for configmaps
	I1016 19:40:21.196291       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:40:21.201917       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1016 19:40:21.201953       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1016 19:40:21.202063       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1016 19:40:21.202130       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1016 19:40:21.203323       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1016 19:40:21.205117       1 aggregator.go:166] initial CRD sync complete...
	I1016 19:40:21.205189       1 autoregister_controller.go:141] Starting autoregister controller
	I1016 19:40:21.205196       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:40:21.205204       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:40:21.246858       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1016 19:40:21.302701       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:40:21.693039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:40:22.711121       1 controller.go:624] quota admission added evaluator for: namespaces
	I1016 19:40:22.758212       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1016 19:40:22.783713       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:40:22.795842       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:40:22.805855       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1016 19:40:22.867499       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.200.15"}
	I1016 19:40:22.885832       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.147.42"}
	I1016 19:40:33.687368       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:40:33.787647       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1016 19:40:33.991275       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ca9f3c035162136550e79b6c0014343408983870ed9af23ed59991d9a05e9e3e] <==
	I1016 19:40:33.795319       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1016 19:40:34.099489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="484.840463ms"
	I1016 19:40:34.099582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.054µs"
	I1016 19:40:34.106663       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-kccdb"
	I1016 19:40:34.106704       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8z9qd"
	I1016 19:40:34.123191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="329.380313ms"
	I1016 19:40:34.123923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="330.386845ms"
	I1016 19:40:34.124139       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 19:40:34.124177       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1016 19:40:34.133812       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 19:40:34.162170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.150195ms"
	I1016 19:40:34.169968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.649351ms"
	I1016 19:40:34.170085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.282µs"
	I1016 19:40:34.173981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.393µs"
	I1016 19:40:34.182475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.25475ms"
	I1016 19:40:34.182571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.413µs"
	I1016 19:40:39.270318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.765µs"
	I1016 19:40:40.305613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.638µs"
	I1016 19:40:41.285335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.854µs"
	I1016 19:40:44.299728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.313376ms"
	I1016 19:40:44.299827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.722µs"
	I1016 19:40:56.324669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.74µs"
	I1016 19:40:56.456528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.616127ms"
	I1016 19:40:56.458297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.547µs"
	I1016 19:41:04.433208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.358µs"
	
	
	==> kube-proxy [9c425ef1360ca51d09afcd876feca2bc97e4e424b623de6bdcadcc122a937383] <==
	I1016 19:40:23.405707       1 server_others.go:69] "Using iptables proxy"
	I1016 19:40:23.426903       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1016 19:40:23.447178       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:40:23.450177       1 server_others.go:152] "Using iptables Proxier"
	I1016 19:40:23.450278       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1016 19:40:23.450310       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1016 19:40:23.450357       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1016 19:40:23.450582       1 server.go:846] "Version info" version="v1.28.0"
	I1016 19:40:23.450790       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:40:23.451494       1 config.go:188] "Starting service config controller"
	I1016 19:40:23.451573       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1016 19:40:23.451632       1 config.go:97] "Starting endpoint slice config controller"
	I1016 19:40:23.451670       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1016 19:40:23.452213       1 config.go:315] "Starting node config controller"
	I1016 19:40:23.452277       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1016 19:40:23.552282       1 shared_informer.go:318] Caches are synced for service config
	I1016 19:40:23.552292       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1016 19:40:23.552417       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [eb71114d965337f8d2433dfb6782af47091510173170df63ce2c629eed64d425] <==
	I1016 19:40:20.016669       1 serving.go:348] Generated self-signed cert in-memory
	I1016 19:40:21.289023       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1016 19:40:21.289060       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:40:21.342561       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1016 19:40:21.342656       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1016 19:40:21.342678       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1016 19:40:21.342698       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1016 19:40:21.363982       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:40:21.364941       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1016 19:40:21.364138       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:40:21.365943       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1016 19:40:21.445199       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1016 19:40:21.465267       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1016 19:40:21.466595       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: I1016 19:40:34.225995     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzcg\" (UniqueName: \"kubernetes.io/projected/1ea37ce9-add3-4e74-8ad8-d0f92b64296d-kube-api-access-zrzcg\") pod \"dashboard-metrics-scraper-5f989dc9cf-kccdb\" (UID: \"1ea37ce9-add3-4e74-8ad8-d0f92b64296d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb"
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: I1016 19:40:34.326675     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c01af607-d3e2-43d1-a893-02a2a8aabdeb-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8z9qd\" (UID: \"c01af607-d3e2-43d1-a893-02a2a8aabdeb\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8z9qd"
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: I1016 19:40:34.326981     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvwd\" (UniqueName: \"kubernetes.io/projected/c01af607-d3e2-43d1-a893-02a2a8aabdeb-kube-api-access-msvwd\") pod \"kubernetes-dashboard-8694d4445c-8z9qd\" (UID: \"c01af607-d3e2-43d1-a893-02a2a8aabdeb\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8z9qd"
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: W1016 19:40:34.452836     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/crio-dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074 WatchSource:0}: Error finding container dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074: Status 404 returned error can't find the container with id dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: W1016 19:40:34.772898     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/crio-1583d0cb0750e83485df363d955c819bfe5c9042e5ab408383de7ae5f7136ae6 WatchSource:0}: Error finding container 1583d0cb0750e83485df363d955c819bfe5c9042e5ab408383de7ae5f7136ae6: Status 404 returned error can't find the container with id 1583d0cb0750e83485df363d955c819bfe5c9042e5ab408383de7ae5f7136ae6
	Oct 16 19:40:39 old-k8s-version-663330 kubelet[777]: I1016 19:40:39.252407     777 scope.go:117] "RemoveContainer" containerID="5d6a57949384ddbd2b95b6f33463826801f5004b52266b83584c53698bf81b70"
	Oct 16 19:40:40 old-k8s-version-663330 kubelet[777]: I1016 19:40:40.259001     777 scope.go:117] "RemoveContainer" containerID="5d6a57949384ddbd2b95b6f33463826801f5004b52266b83584c53698bf81b70"
	Oct 16 19:40:40 old-k8s-version-663330 kubelet[777]: I1016 19:40:40.259311     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:40 old-k8s-version-663330 kubelet[777]: E1016 19:40:40.260710     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:40:41 old-k8s-version-663330 kubelet[777]: I1016 19:40:41.263529     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:41 old-k8s-version-663330 kubelet[777]: E1016 19:40:41.263792     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:40:44 old-k8s-version-663330 kubelet[777]: I1016 19:40:44.418804     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:44 old-k8s-version-663330 kubelet[777]: E1016 19:40:44.419125     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:40:54 old-k8s-version-663330 kubelet[777]: I1016 19:40:54.296860     777 scope.go:117] "RemoveContainer" containerID="83d913eaf88e4605f8517296d5310c6a465cdae0f4f71ad50a666244e2417d90"
	Oct 16 19:40:54 old-k8s-version-663330 kubelet[777]: I1016 19:40:54.315942     777 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8z9qd" podStartSLOduration=11.313584829 podCreationTimestamp="2025-10-16 19:40:34 +0000 UTC" firstStartedPulling="2025-10-16 19:40:34.779794081 +0000 UTC m=+19.901452597" lastFinishedPulling="2025-10-16 19:40:43.777968758 +0000 UTC m=+28.899627283" observedRunningTime="2025-10-16 19:40:44.286670521 +0000 UTC m=+29.408329037" watchObservedRunningTime="2025-10-16 19:40:54.311759515 +0000 UTC m=+39.433418032"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: I1016 19:40:56.070053     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: I1016 19:40:56.305526     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: I1016 19:40:56.306033     777 scope.go:117] "RemoveContainer" containerID="6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: E1016 19:40:56.306295     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:41:04 old-k8s-version-663330 kubelet[777]: I1016 19:41:04.418505     777 scope.go:117] "RemoveContainer" containerID="6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb"
	Oct 16 19:41:04 old-k8s-version-663330 kubelet[777]: E1016 19:41:04.418823     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:41:11 old-k8s-version-663330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:41:11 old-k8s-version-663330 kubelet[777]: I1016 19:41:11.413931     777 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 16 19:41:11 old-k8s-version-663330 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:41:11 old-k8s-version-663330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [61125ed3a2c1c03d74ee146ef4ad2c1ade1a0a25c74fb8e34dd2f73b5e7b97bd] <==
	2025/10/16 19:40:43 Using namespace: kubernetes-dashboard
	2025/10/16 19:40:43 Using in-cluster config to connect to apiserver
	2025/10/16 19:40:43 Using secret token for csrf signing
	2025/10/16 19:40:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:40:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:40:43 Successful initial request to the apiserver, version: v1.28.0
	2025/10/16 19:40:43 Generating JWE encryption key
	2025/10/16 19:40:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:40:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:40:44 Initializing JWE encryption key from synchronized object
	2025/10/16 19:40:44 Creating in-cluster Sidecar client
	2025/10/16 19:40:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:40:44 Serving insecurely on HTTP port: 9090
	2025/10/16 19:41:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:40:43 Starting overwatch
	
	
	==> storage-provisioner [83d913eaf88e4605f8517296d5310c6a465cdae0f4f71ad50a666244e2417d90] <==
	I1016 19:40:23.347940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:40:53.350573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cd6d956ac80482a1f6bcfd367df30bb090e7bdac8f8122b9e06803726d3d4015] <==
	I1016 19:40:54.344305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:40:54.358755       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:40:54.358870       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1016 19:41:11.769933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:41:11.771288       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea29456-c19e-483f-960c-d85113c7aa2e", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-663330_268cbf65-555f-4d91-9077-dfaf7c36db11 became leader
	I1016 19:41:11.771382       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-663330_268cbf65-555f-4d91-9077-dfaf7c36db11!
	I1016 19:41:11.880539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-663330_268cbf65-555f-4d91-9077-dfaf7c36db11!
	

                                                
                                                
-- /stdout --
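The capture above holds the two most telling failure signatures for this run: the first storage-provisioner container dies with "dial tcp 10.96.0.1:443: i/o timeout" against the kubernetes service VIP, and dashboard-metrics-scraper sits in CrashLoopBackOff with a growing back-off. A minimal manual triage sketch is shown below; it was not part of the recorded run, the profile, context, and pod names are taken from the logs above, and everything else is an assumption.

    # Not part of the recorded run: check whether the service VIP the
    # provisioner timed out on is reachable from inside the node.
    out/minikube-linux-arm64 -p old-k8s-version-663330 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version

    # Inspect the crash-looping scraper and pull the logs of its previous attempt.
    kubectl --context old-k8s-version-663330 -n kubernetes-dashboard get pods -o wide
    kubectl --context old-k8s-version-663330 -n kubernetes-dashboard \
      logs --previous dashboard-metrics-scraper-5f989dc9cf-kccdb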
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-663330 -n old-k8s-version-663330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-663330 -n old-k8s-version-663330: exit status 2 (537.903852ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
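Both status probes in this post-mortem use a Go template to select a single field of minikube's status output ({{.APIServer}} here, {{.Host}} further down); the selected field still prints "Running" even though the command exits non-zero, which is why the helper records the exit status as "may be ok". A sketch reading several fields in one call (only .Host and .APIServer appear in this report; .Kubelet is assumed to be another valid field of the same status struct):

    out/minikube-linux-arm64 status -p old-k8s-version-663330 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'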
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-663330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-663330
helpers_test.go:243: (dbg) docker inspect old-k8s-version-663330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178",
	        "Created": "2025-10-16T19:38:44.050016018Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:40:08.149462107Z",
	            "FinishedAt": "2025-10-16T19:40:07.284235749Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/hosts",
	        "LogPath": "/var/lib/docker/containers/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178-json.log",
	        "Name": "/old-k8s-version-663330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-663330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-663330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178",
	                "LowerDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91ff1676dfb24263837902c7cf6d793de5cfeecee80400165619f3b3bc9dd706/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-663330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-663330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-663330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-663330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-663330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4fcdf46f21f595b10589acd4d6d7e3d7296b723f51f0f9c6ecfd0deb97b97870",
	            "SandboxKey": "/var/run/docker/netns/4fcdf46f21f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-663330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:d0:65:5e:8a:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "954005e57e721c33bcbbc4d582e61219818cba738fb17844a562d84e477b2115",
	                    "EndpointID": "6513cf01b463764416dc47985f669c2adbfb172bff19c98a3262642d83f56031",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-663330",
	                        "99b40d8e6d48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
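Most of what the post-mortem needs from the inspect dump above is the published API-server port and the container's address on the old-k8s-version-663330 network. The same Go-template pattern that minikube itself uses later in this log (the cli_runner lines for cert-expiration-828182) can pull just those values; a sketch, not part of the recorded run:

    # Host port mapped to the API server (8443/tcp inside the container)
    docker inspect old-k8s-version-663330 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # Container IP on its minikube network
    docker inspect old-k8s-version-663330 \
      --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'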
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330: exit status 2 (493.817828ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-663330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-663330 logs -n 25: (1.895623611s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-078761 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo containerd config dump                                                                                                                                                                                                  │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo crio config                                                                                                                                                                                                             │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ delete  │ -p cilium-078761                                                                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p force-systemd-env-871877                                                                                                                                                                                                                   │ force-systemd-env-871877 │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-options-853056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ cert-options-853056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ -p cert-options-853056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p cert-options-853056                                                                                                                                                                                                                        │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:39 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │                     │
	│ stop    │ -p old-k8s-version-663330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │ 16 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:41:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:41:01.063095  472198 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:41:01.063237  472198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:01.063242  472198 out.go:374] Setting ErrFile to fd 2...
	I1016 19:41:01.063246  472198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:01.063513  472198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:41:01.063908  472198 out.go:368] Setting JSON to false
	I1016 19:41:01.064894  472198 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8590,"bootTime":1760635071,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:41:01.064950  472198 start.go:141] virtualization:  
	I1016 19:41:01.068399  472198 out.go:179] * [cert-expiration-828182] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:41:01.072205  472198 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:41:01.072286  472198 notify.go:220] Checking for updates...
	I1016 19:41:01.075389  472198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:41:01.078445  472198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:41:01.081399  472198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:41:01.084308  472198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:41:01.087319  472198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:41:01.091037  472198 config.go:182] Loaded profile config "cert-expiration-828182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:01.091684  472198 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:41:01.128102  472198 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:41:01.128214  472198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:41:01.191860  472198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:41:01.181989018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:41:01.191957  472198 docker.go:318] overlay module found
	I1016 19:41:01.195247  472198 out.go:179] * Using the docker driver based on existing profile
	I1016 19:41:01.198432  472198 start.go:305] selected driver: docker
	I1016 19:41:01.198444  472198 start.go:925] validating driver "docker" against &{Name:cert-expiration-828182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:01.198576  472198 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:41:01.199391  472198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:41:01.262447  472198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:41:01.251396403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:41:01.262807  472198 cni.go:84] Creating CNI manager for ""
	I1016 19:41:01.262869  472198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:41:01.262910  472198 start.go:349] cluster config:
	{Name:cert-expiration-828182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:01.266352  472198 out.go:179] * Starting "cert-expiration-828182" primary control-plane node in "cert-expiration-828182" cluster
	I1016 19:41:01.269104  472198 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:41:01.272080  472198 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:41:01.275172  472198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:01.275243  472198 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:41:01.275256  472198 cache.go:58] Caching tarball of preloaded images
	I1016 19:41:01.275258  472198 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:41:01.275342  472198 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:41:01.275350  472198 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:41:01.275448  472198 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/config.json ...
	I1016 19:41:01.297233  472198 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:41:01.297245  472198 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:41:01.297257  472198 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:41:01.297278  472198 start.go:360] acquireMachinesLock for cert-expiration-828182: {Name:mke633a1e943d77132d294d88356b824676d1e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:41:01.297337  472198 start.go:364] duration metric: took 40.189µs to acquireMachinesLock for "cert-expiration-828182"
	I1016 19:41:01.297355  472198 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:41:01.297359  472198 fix.go:54] fixHost starting: 
	I1016 19:41:01.297608  472198 cli_runner.go:164] Run: docker container inspect cert-expiration-828182 --format={{.State.Status}}
	I1016 19:41:01.315587  472198 fix.go:112] recreateIfNeeded on cert-expiration-828182: state=Running err=<nil>
	W1016 19:41:01.315607  472198 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 19:41:01.318732  472198 out.go:252] * Updating the running docker "cert-expiration-828182" container ...
	I1016 19:41:01.318756  472198 machine.go:93] provisionDockerMachine start ...
	I1016 19:41:01.318845  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:01.343663  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:01.344045  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:01.344053  472198 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:41:01.494048  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-828182
	
	I1016 19:41:01.494070  472198 ubuntu.go:182] provisioning hostname "cert-expiration-828182"
	I1016 19:41:01.494136  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:01.513531  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:01.513849  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:01.513858  472198 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-828182 && echo "cert-expiration-828182" | sudo tee /etc/hostname
	I1016 19:41:01.672805  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-828182
	
	I1016 19:41:01.672888  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:01.692835  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:01.693180  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:01.693196  472198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-828182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-828182/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-828182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:41:01.842447  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:41:01.842473  472198 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:41:01.842497  472198 ubuntu.go:190] setting up certificates
	I1016 19:41:01.842513  472198 provision.go:84] configureAuth start
	I1016 19:41:01.842573  472198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-828182
	I1016 19:41:01.867168  472198 provision.go:143] copyHostCerts
	I1016 19:41:01.867228  472198 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:41:01.867242  472198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:41:01.867318  472198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:41:01.867433  472198 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:41:01.867437  472198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:41:01.867468  472198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:41:01.867529  472198 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:41:01.867532  472198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:41:01.867554  472198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:41:01.867607  472198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-828182 san=[127.0.0.1 192.168.85.2 cert-expiration-828182 localhost minikube]
	I1016 19:41:02.480547  472198 provision.go:177] copyRemoteCerts
	I1016 19:41:02.480601  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:41:02.480649  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:02.503611  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:02.610523  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:41:02.634081  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1016 19:41:02.653301  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:41:02.672629  472198 provision.go:87] duration metric: took 830.093571ms to configureAuth
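
The copyHostCerts step above follows a simple pattern: if a stale copy of each certificate already exists under .minikube, remove it, then copy the source file over. A minimal Go sketch of that pattern (paths are placeholders; this is not minikube's exec_runner implementation):

package main

import (
	"io"
	"os"
	"path/filepath"
)

// copyCert mirrors the pattern above: remove any stale copy at dst, then copy src over.
func copyCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		// "found <dst>, removing ..."
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths, standing in for .minikube/certs/ca.pem -> .minikube/ca.pem.
	if err := copyCert("certs/ca.pem", "ca.pem"); err != nil {
		panic(err)
	}
}
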
	I1016 19:41:02.672658  472198 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:41:02.672856  472198 config.go:182] Loaded profile config "cert-expiration-828182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:02.672957  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:02.691522  472198 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:02.691852  472198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1016 19:41:02.691875  472198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:41:08.048332  472198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:41:08.048345  472198 machine.go:96] duration metric: took 6.729582891s to provisionDockerMachine
	I1016 19:41:08.048354  472198 start.go:293] postStartSetup for "cert-expiration-828182" (driver="docker")
	I1016 19:41:08.048364  472198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:41:08.048426  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:41:08.048476  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.067079  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.169055  472198 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:41:08.172544  472198 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:41:08.172561  472198 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:41:08.172571  472198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:41:08.172624  472198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:41:08.172709  472198 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:41:08.172813  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:41:08.183213  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:41:08.202837  472198 start.go:296] duration metric: took 154.309285ms for postStartSetup
	I1016 19:41:08.202923  472198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:41:08.202994  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.232023  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.338906  472198 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:41:08.345246  472198 fix.go:56] duration metric: took 7.047872175s for fixHost
	I1016 19:41:08.345260  472198 start.go:83] releasing machines lock for "cert-expiration-828182", held for 7.04791604s
	I1016 19:41:08.345334  472198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-828182
	I1016 19:41:08.363676  472198 ssh_runner.go:195] Run: cat /version.json
	I1016 19:41:08.363718  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.363721  472198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:41:08.363769  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:08.392183  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.401249  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:08.492985  472198 ssh_runner.go:195] Run: systemctl --version
	I1016 19:41:08.612400  472198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:41:08.667930  472198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:41:08.672527  472198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:41:08.672600  472198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:41:08.681759  472198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:41:08.681772  472198 start.go:495] detecting cgroup driver to use...
	I1016 19:41:08.681804  472198 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:41:08.681850  472198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:41:08.697474  472198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:41:08.711148  472198 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:41:08.711206  472198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:41:08.728898  472198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:41:08.743061  472198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:41:08.893809  472198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:41:09.053903  472198 docker.go:234] disabling docker service ...
	I1016 19:41:09.053973  472198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:41:09.069382  472198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:41:09.085630  472198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:41:09.232872  472198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:41:09.387062  472198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:41:09.400384  472198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:41:09.415293  472198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:41:09.415370  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.425030  472198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:41:09.425102  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.434679  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.443966  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.461367  472198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:41:09.471907  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.481606  472198 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.490210  472198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:09.499438  472198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:41:09.507014  472198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:41:09.516450  472198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:09.664217  472198 ssh_runner.go:195] Run: sudo systemctl restart crio
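
The sed commands above point cri-o at the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager before crio is restarted. A rough stdlib-only Go equivalent of those two edits, shown purely as an illustration of the rewrite (it assumes the drop-in file exists and that the process can write it):

package main

import (
	"os"
	"regexp"
)

// Rewrite the pause_image and cgroup_manager lines in the cri-o drop-in,
// mirroring the two sed invocations above.
func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}
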
	I1016 19:41:09.852904  472198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:41:09.852967  472198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:41:09.857175  472198 start.go:563] Will wait 60s for crictl version
	I1016 19:41:09.857241  472198 ssh_runner.go:195] Run: which crictl
	I1016 19:41:09.861059  472198 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:41:09.892390  472198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:41:09.892486  472198 ssh_runner.go:195] Run: crio --version
	I1016 19:41:09.921718  472198 ssh_runner.go:195] Run: crio --version
	I1016 19:41:09.962225  472198 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:41:09.965274  472198 cli_runner.go:164] Run: docker network inspect cert-expiration-828182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:41:09.979965  472198 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:41:09.983979  472198 kubeadm.go:883] updating cluster {Name:cert-expiration-828182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:41:09.984089  472198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:09.984146  472198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:41:10.031418  472198 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:41:10.031430  472198 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:41:10.031495  472198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:41:10.061742  472198 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:41:10.061755  472198 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:41:10.061761  472198 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:41:10.061919  472198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-828182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:41:10.062009  472198 ssh_runner.go:195] Run: crio config
	I1016 19:41:10.125794  472198 cni.go:84] Creating CNI manager for ""
	I1016 19:41:10.125806  472198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:41:10.125822  472198 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:41:10.125843  472198 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-828182 NodeName:cert-expiration-828182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:41:10.126006  472198 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-828182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:41:10.126074  472198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:41:10.135760  472198 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:41:10.135832  472198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:41:10.143643  472198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1016 19:41:10.156506  472198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:41:10.169449  472198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
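
The kubeadm.yaml written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---). A small stdlib-only Go sketch that splits such a file on the document separator and reports each document's kind, assuming a local copy named kubeadm.yaml, can serve as a quick sanity check:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Split a multi-document kubeadm config on "---" and print each document's kind.
func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the file above
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}
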
	I1016 19:41:10.184150  472198 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:41:10.188395  472198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:10.339871  472198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:41:10.355586  472198 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182 for IP: 192.168.85.2
	I1016 19:41:10.355598  472198 certs.go:195] generating shared ca certs ...
	I1016 19:41:10.355612  472198 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:10.355758  472198 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:41:10.355795  472198 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:41:10.355801  472198 certs.go:257] generating profile certs ...
	W1016 19:41:10.355926  472198 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1016 19:41:10.355950  472198 certs.go:624] cert expired /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt: expiration: 2025-10-16 19:40:35 +0000 UTC, now: 2025-10-16 19:41:10.355944687 +0000 UTC m=+9.339877465
	I1016 19:41:10.356055  472198 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.key
	I1016 19:41:10.356069  472198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt with IP's: []
	I1016 19:41:10.727526  472198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt ...
	I1016 19:41:10.727542  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.crt: {Name:mk018de438ee3946a3b6dcaf0ac6ccaeff1e56c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:10.727752  472198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.key ...
	I1016 19:41:10.727761  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/client.key: {Name:mk88c9970afc913a3e219a838f242272de71563d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1016 19:41:10.727975  472198 out.go:285] ! Certificate apiserver.crt.f9319a21 has expired. Generating a new one...
	I1016 19:41:10.728060  472198 certs.go:624] cert expired /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt.f9319a21: expiration: 2025-10-16 19:40:35 +0000 UTC, now: 2025-10-16 19:41:10.728052681 +0000 UTC m=+9.711985467
	I1016 19:41:10.728173  472198 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.key.f9319a21
	I1016 19:41:10.728188  472198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt.f9319a21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1016 19:41:11.403965  472198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt.f9319a21 ...
	I1016 19:41:11.403981  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt.f9319a21: {Name:mk88a242c6465d28fa7be9bd43cfeefb7434da18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:11.404387  472198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.key.f9319a21 ...
	I1016 19:41:11.404397  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.key.f9319a21: {Name:mkf5c9a03a596aaf796c4bbe2a67385bf19ce67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:11.404456  472198 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt.f9319a21 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt
	I1016 19:41:11.404750  472198 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.key.f9319a21 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.key
	W1016 19:41:11.404926  472198 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1016 19:41:11.404958  472198 certs.go:624] cert expired /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.crt: expiration: 2025-10-16 19:40:36 +0000 UTC, now: 2025-10-16 19:41:11.404949203 +0000 UTC m=+10.388881981
	I1016 19:41:11.405967  472198 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.key
	I1016 19:41:11.405988  472198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.crt with IP's: []
	I1016 19:41:13.129516  472198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.crt ...
	I1016 19:41:13.129534  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.crt: {Name:mkdf331f3d7dd2a586d538f512fcfee94951a64a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:13.129706  472198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.key ...
	I1016 19:41:13.129717  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.key: {Name:mk7b53caabc522bfcbf0a913ceb3fd48acb384a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
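
Because client.crt, apiserver.crt.f9319a21 and proxy-client.crt had all passed their NotAfter dates, new profile certificates are generated and signed by the existing minikube CA. The sketch below shows the general shape of that step in Go; file names, subject and validity are illustrative assumptions (it also assumes an RSA PKCS#1 CA key, which is not guaranteed), so it is not minikube's crypto.go implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Regenerate a client certificate signed by an existing CA after the old one expired.
func main() {
	caCertPEM, err := os.ReadFile("ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key") // placeholder path
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("CA cert or key is not valid PEM")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		panic(err)
	}
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // one-year validity, matching CertExpiration:8760h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)})
	if err := os.WriteFile("client.crt", certPEM, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("client.key", keyPEM, 0o600); err != nil {
		panic(err)
	}
}
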
	I1016 19:41:13.129982  472198 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:41:13.130031  472198 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:41:13.130039  472198 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:41:13.130062  472198 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:41:13.130084  472198 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:41:13.130114  472198 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:41:13.130160  472198 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:41:13.130835  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:41:13.151040  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:41:13.176886  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:41:13.201357  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:41:13.225166  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1016 19:41:13.247926  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 19:41:13.269740  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:41:13.315760  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/cert-expiration-828182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 19:41:13.406501  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:41:13.483901  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:41:13.534107  472198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:41:13.588568  472198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:41:13.682009  472198 ssh_runner.go:195] Run: openssl version
	I1016 19:41:13.714469  472198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:41:13.766242  472198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:41:13.775354  472198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:41:13.775413  472198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:41:13.957741  472198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:41:13.972022  472198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:41:13.999657  472198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:14.023358  472198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:14.023424  472198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:14.133980  472198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:41:14.148115  472198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:41:14.169580  472198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:41:14.175690  472198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:41:14.175761  472198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:41:14.255862  472198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:41:14.267556  472198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:41:14.288940  472198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:41:14.388720  472198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:41:14.461338  472198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:41:14.624648  472198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:41:14.671040  472198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:41:14.734075  472198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
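
Each `openssl x509 -noout -in <cert> -checkend 86400` run above succeeds only if the certificate is still valid for at least another 24 hours. A minimal Go equivalent of that check (illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Exit non-zero if the certificate given on the command line is expired
// or will expire within the next 24 hours (the -checkend 86400 window).
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	pemBytes, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
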
	I1016 19:41:14.788235  472198 kubeadm.go:400] StartCluster: {Name:cert-expiration-828182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-828182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:14.788326  472198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:41:14.788398  472198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:41:14.846102  472198 cri.go:89] found id: "380f8f98a15c506c0af6a0e2ed3d257ccf390fc60166c898a0c18824e93bc554"
	I1016 19:41:14.846114  472198 cri.go:89] found id: "fcb324ed05b489abb6109ed7c8be8ee66aad0879b697008c570730ee8f2cf3fc"
	I1016 19:41:14.846117  472198 cri.go:89] found id: "67fc3a0435fd21c9ad63a8e3f85d79825a1f74dd8cac1b1923761d1c4d550dbb"
	I1016 19:41:14.846120  472198 cri.go:89] found id: "029a1a92e4927b0eb112f27dd1cee1f5b869af4a4ae2b4a5b0a15ec145cae5c1"
	I1016 19:41:14.846123  472198 cri.go:89] found id: "5eda0e756e3afa4ed5c443208ad9809d2f41652ff35d3099401455fed5e8d3be"
	I1016 19:41:14.846126  472198 cri.go:89] found id: "1206043fa1df4181b16ec07082da00ae1eb3483bc74e82bf6a44be83d0372348"
	I1016 19:41:14.846129  472198 cri.go:89] found id: "d3321826cfeecd4c88982ceabf248706002455757426a1940c8f9b3189594f23"
	I1016 19:41:14.846131  472198 cri.go:89] found id: "e33025defb8e9089732c7d906f0bac4b629f421568a6a9bc7cf33a1803f28f74"
	I1016 19:41:14.846133  472198 cri.go:89] found id: "00c9c9fee9dffc7c87111480061250e77c9d235c5ec11c451a895527c1638696"
	I1016 19:41:14.846140  472198 cri.go:89] found id: "d0c2f4e94f536995a5f1b1edb41bf27026fd0e8895e3ee1c2873af7e0fefeffb"
	I1016 19:41:14.846142  472198 cri.go:89] found id: "b8244a4c7c5b83a32b82c20892124c29b7111c2504130094a149dbf16dbc5737"
	I1016 19:41:14.846145  472198 cri.go:89] found id: "3c88ad70407e04f2fe62ff180f9767b4df21e6cef8ed774fb2bbd7d2a3ea1c7e"
	I1016 19:41:14.846147  472198 cri.go:89] found id: "caeae161de89ab5be1d7fd63225de9dcf0fe11b2f6f0f747be02ac2e65bc7ca1"
	I1016 19:41:14.846149  472198 cri.go:89] found id: "0dbd87919e948c52ecc0998ac2aa9e007059721b4952b31e8b37c600edbbc0f7"
	I1016 19:41:14.846151  472198 cri.go:89] found id: "7ac65cf5bab2c5c0017beb67716f09869a210f95663567bb77d77f627818bc3f"
	I1016 19:41:14.846156  472198 cri.go:89] found id: "698e2a5b41d23fc4a7dd51d249f70b11db51565afcca8cd4c813b16f10756d6d"
	I1016 19:41:14.846158  472198 cri.go:89] found id: ""
	I1016 19:41:14.846210  472198 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:41:14.864550  472198 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:41:14Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:41:14.864628  472198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:41:14.878228  472198 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:41:14.878237  472198 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:41:14.878291  472198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:41:14.890583  472198 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:41:14.891257  472198 kubeconfig.go:125] found "cert-expiration-828182" server: "https://192.168.85.2:8443"
	I1016 19:41:14.893074  472198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:41:14.908080  472198 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1016 19:41:14.908103  472198 kubeadm.go:601] duration metric: took 29.861731ms to restartPrimaryControlPlane
	I1016 19:41:14.908114  472198 kubeadm.go:402] duration metric: took 119.88883ms to StartCluster
	I1016 19:41:14.908127  472198 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:14.908181  472198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:41:14.909193  472198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:14.909401  472198 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:41:14.909735  472198 config.go:182] Loaded profile config "cert-expiration-828182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:14.909766  472198 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:41:14.909825  472198 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-828182"
	I1016 19:41:14.909836  472198 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-828182"
	W1016 19:41:14.909841  472198 addons.go:247] addon storage-provisioner should already be in state true
	I1016 19:41:14.909869  472198 host.go:66] Checking if "cert-expiration-828182" exists ...
	I1016 19:41:14.909969  472198 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-828182"
	I1016 19:41:14.909983  472198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-828182"
	I1016 19:41:14.910303  472198 cli_runner.go:164] Run: docker container inspect cert-expiration-828182 --format={{.State.Status}}
	I1016 19:41:14.910523  472198 cli_runner.go:164] Run: docker container inspect cert-expiration-828182 --format={{.State.Status}}
	I1016 19:41:14.913372  472198 out.go:179] * Verifying Kubernetes components...
	I1016 19:41:14.924015  472198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:14.960452  472198 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:41:14.969801  472198 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-828182"
	W1016 19:41:14.969813  472198 addons.go:247] addon default-storageclass should already be in state true
	I1016 19:41:14.969836  472198 host.go:66] Checking if "cert-expiration-828182" exists ...
	I1016 19:41:14.970266  472198 cli_runner.go:164] Run: docker container inspect cert-expiration-828182 --format={{.State.Status}}
	I1016 19:41:14.970446  472198 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:41:14.970453  472198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:41:14.970495  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:15.013541  472198 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:41:15.013555  472198 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:41:15.013636  472198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-828182
	I1016 19:41:15.015233  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:15.054254  472198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/cert-expiration-828182/id_rsa Username:docker}
	I1016 19:41:15.245319  472198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:41:15.295013  472198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:41:15.452552  472198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	
	
	==> CRI-O <==
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.073179267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.085756304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.086925758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.105780191Z" level=info msg="Created container 6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb/dashboard-metrics-scraper" id=8cec1c10-6f1f-4231-a4c8-a0e8bcf24021 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.107027709Z" level=info msg="Starting container: 6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb" id=d1b1b804-b3df-4225-b36e-79218a8bc033 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.10905121Z" level=info msg="Started container" PID=1631 containerID=6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb/dashboard-metrics-scraper id=d1b1b804-b3df-4225-b36e-79218a8bc033 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074
	Oct 16 19:40:56 old-k8s-version-663330 conmon[1629]: conmon 6c720d140a0e148385d0 <ninfo>: container 1631 exited with status 1
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.307904448Z" level=info msg="Removing container: abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca" id=582367a4-0ff5-48a0-b195-c5a34108ded3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.322548161Z" level=info msg="Error loading conmon cgroup of container abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca: cgroup deleted" id=582367a4-0ff5-48a0-b195-c5a34108ded3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:40:56 old-k8s-version-663330 crio[653]: time="2025-10-16T19:40:56.328169642Z" level=info msg="Removed container abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb/dashboard-metrics-scraper" id=582367a4-0ff5-48a0-b195-c5a34108ded3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.615231122Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.620222345Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.62026123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.620284582Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.623684144Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.623852334Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.6238892Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.627240212Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.62727565Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.627301291Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.630620245Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.630693715Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.630719307Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.634094015Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:41:03 old-k8s-version-663330 crio[653]: time="2025-10-16T19:41:03.634139645Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	6c720d140a0e1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   dec1fcaa6ee97       dashboard-metrics-scraper-5f989dc9cf-kccdb       kubernetes-dashboard
	cd6d956ac8048       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   98fb720208806       storage-provisioner                              kube-system
	61125ed3a2c1c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   1583d0cb0750e       kubernetes-dashboard-8694d4445c-8z9qd            kubernetes-dashboard
	9e2febbb05c3f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   3b13e8cd9dc8b       coredns-5dd5756b68-vqfrr                         kube-system
	83d913eaf88e4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   98fb720208806       storage-provisioner                              kube-system
	9c425ef1360ca       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   fc4b705d154b6       kube-proxy-7fvsr                                 kube-system
	f48070f990185       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   0e48fa61d13c6       kindnet-br5zb                                    kube-system
	80d9e9b9780ae       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   3a312851c8f83       busybox                                          default
	ca9f3c0351621       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   f9ee4acc650d0       kube-controller-manager-old-k8s-version-663330   kube-system
	eb71114d96533       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   f4d980f103e34       kube-scheduler-old-k8s-version-663330            kube-system
	e4d549de261d3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   4f0020cc7dcd5       kube-apiserver-old-k8s-version-663330            kube-system
	d276f51870b23       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   0cbf91771546b       etcd-old-k8s-version-663330                      kube-system
	
	
	==> coredns [9e2febbb05c3f2f1d0b7636d3a32baf4c41042a9ed63eda2b1f9db102f12f8e7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41319 - 50623 "HINFO IN 7568889462730369034.6524799246419944909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046578537s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-663330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-663330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=old-k8s-version-663330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_39_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:39:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-663330
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:41:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:40:51 +0000   Thu, 16 Oct 2025 19:39:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-663330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                1d0ae713-f566-4024-8f13-ca98591cb606
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-vqfrr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-old-k8s-version-663330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m6s
	  kube-system                 kindnet-br5zb                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-663330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-663330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-7fvsr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-663330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-kccdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8z9qd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-663330 event: Registered Node old-k8s-version-663330 in Controller
	  Normal  NodeReady                99s                    kubelet          Node old-k8s-version-663330 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-663330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-663330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-663330 event: Registered Node old-k8s-version-663330 in Controller
	
	
	==> dmesg <==
	[Oct16 19:11] overlayfs: idmapped layers are currently not supported
	[Oct16 19:16] overlayfs: idmapped layers are currently not supported
	[ +33.922450] overlayfs: idmapped layers are currently not supported
	[Oct16 19:18] overlayfs: idmapped layers are currently not supported
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d276f51870b2329c6a418b3790b4e14cdc49f20c8f1e281021038d55047a959f] <==
	{"level":"info","ts":"2025-10-16T19:40:15.806803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T19:40:15.806814Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T19:40:15.807072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-16T19:40:15.807128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-16T19:40:15.807196Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:40:15.807221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T19:40:15.810379Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-16T19:40:15.81176Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-16T19:40:15.811796Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-16T19:40:15.811846Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:40:15.811855Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-16T19:40:17.155169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-16T19:40:17.155251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-16T19:40:17.155287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-16T19:40:17.155311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.155319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.155332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.155347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-16T19:40:17.158521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T19:40:17.160573Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-16T19:40:17.158487Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-663330 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-16T19:40:17.168432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T19:40:17.168574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-16T19:40:17.176208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-16T19:40:17.186945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:41:18 up  2:23,  0 user,  load average: 3.65, 3.37, 2.80
	Linux old-k8s-version-663330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f48070f990185c1ad93ea2494701be1ee1f88ad24465a040b14b88c4121179b2] <==
	I1016 19:40:23.409094       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:40:23.409553       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:40:23.409759       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:40:23.409805       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:40:23.409850       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:40:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:40:23.611376       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:40:23.611630       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:40:23.611743       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:40:23.611915       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:40:53.611479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:40:53.611523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:40:53.611646       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 19:40:53.611789       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1016 19:40:55.217223       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:40:55.217254       1 metrics.go:72] Registering metrics
	I1016 19:40:55.217324       1 controller.go:711] "Syncing nftables rules"
	I1016 19:41:03.614904       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:41:03.614972       1 main.go:301] handling current node
	I1016 19:41:13.617969       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:41:13.618008       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e4d549de261d3aa9d5926c279931f54e3030d5f24546a2b72e6f2aa811185db2] <==
	I1016 19:40:21.180703       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:40:21.196231       1 shared_informer.go:318] Caches are synced for configmaps
	I1016 19:40:21.196291       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:40:21.201917       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1016 19:40:21.201953       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1016 19:40:21.202063       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1016 19:40:21.202130       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1016 19:40:21.203323       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1016 19:40:21.205117       1 aggregator.go:166] initial CRD sync complete...
	I1016 19:40:21.205189       1 autoregister_controller.go:141] Starting autoregister controller
	I1016 19:40:21.205196       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:40:21.205204       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:40:21.246858       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1016 19:40:21.302701       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:40:21.693039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:40:22.711121       1 controller.go:624] quota admission added evaluator for: namespaces
	I1016 19:40:22.758212       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1016 19:40:22.783713       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:40:22.795842       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:40:22.805855       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1016 19:40:22.867499       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.200.15"}
	I1016 19:40:22.885832       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.147.42"}
	I1016 19:40:33.687368       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:40:33.787647       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1016 19:40:33.991275       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ca9f3c035162136550e79b6c0014343408983870ed9af23ed59991d9a05e9e3e] <==
	I1016 19:40:33.795319       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1016 19:40:34.099489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="484.840463ms"
	I1016 19:40:34.099582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.054µs"
	I1016 19:40:34.106663       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-kccdb"
	I1016 19:40:34.106704       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8z9qd"
	I1016 19:40:34.123191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="329.380313ms"
	I1016 19:40:34.123923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="330.386845ms"
	I1016 19:40:34.124139       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 19:40:34.124177       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1016 19:40:34.133812       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 19:40:34.162170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.150195ms"
	I1016 19:40:34.169968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.649351ms"
	I1016 19:40:34.170085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.282µs"
	I1016 19:40:34.173981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.393µs"
	I1016 19:40:34.182475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.25475ms"
	I1016 19:40:34.182571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.413µs"
	I1016 19:40:39.270318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.765µs"
	I1016 19:40:40.305613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.638µs"
	I1016 19:40:41.285335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.854µs"
	I1016 19:40:44.299728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.313376ms"
	I1016 19:40:44.299827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.722µs"
	I1016 19:40:56.324669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.74µs"
	I1016 19:40:56.456528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.616127ms"
	I1016 19:40:56.458297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.547µs"
	I1016 19:41:04.433208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.358µs"
	
	
	==> kube-proxy [9c425ef1360ca51d09afcd876feca2bc97e4e424b623de6bdcadcc122a937383] <==
	I1016 19:40:23.405707       1 server_others.go:69] "Using iptables proxy"
	I1016 19:40:23.426903       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1016 19:40:23.447178       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:40:23.450177       1 server_others.go:152] "Using iptables Proxier"
	I1016 19:40:23.450278       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1016 19:40:23.450310       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1016 19:40:23.450357       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1016 19:40:23.450582       1 server.go:846] "Version info" version="v1.28.0"
	I1016 19:40:23.450790       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:40:23.451494       1 config.go:188] "Starting service config controller"
	I1016 19:40:23.451573       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1016 19:40:23.451632       1 config.go:97] "Starting endpoint slice config controller"
	I1016 19:40:23.451670       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1016 19:40:23.452213       1 config.go:315] "Starting node config controller"
	I1016 19:40:23.452277       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1016 19:40:23.552282       1 shared_informer.go:318] Caches are synced for service config
	I1016 19:40:23.552292       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1016 19:40:23.552417       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [eb71114d965337f8d2433dfb6782af47091510173170df63ce2c629eed64d425] <==
	I1016 19:40:20.016669       1 serving.go:348] Generated self-signed cert in-memory
	I1016 19:40:21.289023       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1016 19:40:21.289060       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:40:21.342561       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1016 19:40:21.342656       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1016 19:40:21.342678       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1016 19:40:21.342698       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1016 19:40:21.363982       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:40:21.364941       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1016 19:40:21.364138       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:40:21.365943       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1016 19:40:21.445199       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1016 19:40:21.465267       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1016 19:40:21.466595       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: I1016 19:40:34.225995     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzcg\" (UniqueName: \"kubernetes.io/projected/1ea37ce9-add3-4e74-8ad8-d0f92b64296d-kube-api-access-zrzcg\") pod \"dashboard-metrics-scraper-5f989dc9cf-kccdb\" (UID: \"1ea37ce9-add3-4e74-8ad8-d0f92b64296d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb"
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: I1016 19:40:34.326675     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c01af607-d3e2-43d1-a893-02a2a8aabdeb-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8z9qd\" (UID: \"c01af607-d3e2-43d1-a893-02a2a8aabdeb\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8z9qd"
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: I1016 19:40:34.326981     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvwd\" (UniqueName: \"kubernetes.io/projected/c01af607-d3e2-43d1-a893-02a2a8aabdeb-kube-api-access-msvwd\") pod \"kubernetes-dashboard-8694d4445c-8z9qd\" (UID: \"c01af607-d3e2-43d1-a893-02a2a8aabdeb\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8z9qd"
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: W1016 19:40:34.452836     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/crio-dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074 WatchSource:0}: Error finding container dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074: Status 404 returned error can't find the container with id dec1fcaa6ee976c81b1d8b72a774d462e36e894bb06ae06a1270b77061964074
	Oct 16 19:40:34 old-k8s-version-663330 kubelet[777]: W1016 19:40:34.772898     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/99b40d8e6d48a2971d29c9197014370167ba1ef8bbd32e84779adc3bc31f6178/crio-1583d0cb0750e83485df363d955c819bfe5c9042e5ab408383de7ae5f7136ae6 WatchSource:0}: Error finding container 1583d0cb0750e83485df363d955c819bfe5c9042e5ab408383de7ae5f7136ae6: Status 404 returned error can't find the container with id 1583d0cb0750e83485df363d955c819bfe5c9042e5ab408383de7ae5f7136ae6
	Oct 16 19:40:39 old-k8s-version-663330 kubelet[777]: I1016 19:40:39.252407     777 scope.go:117] "RemoveContainer" containerID="5d6a57949384ddbd2b95b6f33463826801f5004b52266b83584c53698bf81b70"
	Oct 16 19:40:40 old-k8s-version-663330 kubelet[777]: I1016 19:40:40.259001     777 scope.go:117] "RemoveContainer" containerID="5d6a57949384ddbd2b95b6f33463826801f5004b52266b83584c53698bf81b70"
	Oct 16 19:40:40 old-k8s-version-663330 kubelet[777]: I1016 19:40:40.259311     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:40 old-k8s-version-663330 kubelet[777]: E1016 19:40:40.260710     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:40:41 old-k8s-version-663330 kubelet[777]: I1016 19:40:41.263529     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:41 old-k8s-version-663330 kubelet[777]: E1016 19:40:41.263792     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:40:44 old-k8s-version-663330 kubelet[777]: I1016 19:40:44.418804     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:44 old-k8s-version-663330 kubelet[777]: E1016 19:40:44.419125     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:40:54 old-k8s-version-663330 kubelet[777]: I1016 19:40:54.296860     777 scope.go:117] "RemoveContainer" containerID="83d913eaf88e4605f8517296d5310c6a465cdae0f4f71ad50a666244e2417d90"
	Oct 16 19:40:54 old-k8s-version-663330 kubelet[777]: I1016 19:40:54.315942     777 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8z9qd" podStartSLOduration=11.313584829 podCreationTimestamp="2025-10-16 19:40:34 +0000 UTC" firstStartedPulling="2025-10-16 19:40:34.779794081 +0000 UTC m=+19.901452597" lastFinishedPulling="2025-10-16 19:40:43.777968758 +0000 UTC m=+28.899627283" observedRunningTime="2025-10-16 19:40:44.286670521 +0000 UTC m=+29.408329037" watchObservedRunningTime="2025-10-16 19:40:54.311759515 +0000 UTC m=+39.433418032"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: I1016 19:40:56.070053     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: I1016 19:40:56.305526     777 scope.go:117] "RemoveContainer" containerID="abcbcc8cfc239104405f51a62c429a4edaa979d3ef31d8e3d90a6217da3300ca"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: I1016 19:40:56.306033     777 scope.go:117] "RemoveContainer" containerID="6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb"
	Oct 16 19:40:56 old-k8s-version-663330 kubelet[777]: E1016 19:40:56.306295     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:41:04 old-k8s-version-663330 kubelet[777]: I1016 19:41:04.418505     777 scope.go:117] "RemoveContainer" containerID="6c720d140a0e148385d07934721f94a02453ced5980c91dade254009be3878bb"
	Oct 16 19:41:04 old-k8s-version-663330 kubelet[777]: E1016 19:41:04.418823     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kccdb_kubernetes-dashboard(1ea37ce9-add3-4e74-8ad8-d0f92b64296d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kccdb" podUID="1ea37ce9-add3-4e74-8ad8-d0f92b64296d"
	Oct 16 19:41:11 old-k8s-version-663330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:41:11 old-k8s-version-663330 kubelet[777]: I1016 19:41:11.413931     777 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 16 19:41:11 old-k8s-version-663330 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:41:11 old-k8s-version-663330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [61125ed3a2c1c03d74ee146ef4ad2c1ade1a0a25c74fb8e34dd2f73b5e7b97bd] <==
	2025/10/16 19:40:43 Using namespace: kubernetes-dashboard
	2025/10/16 19:40:43 Using in-cluster config to connect to apiserver
	2025/10/16 19:40:43 Using secret token for csrf signing
	2025/10/16 19:40:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:40:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:40:43 Successful initial request to the apiserver, version: v1.28.0
	2025/10/16 19:40:43 Generating JWE encryption key
	2025/10/16 19:40:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:40:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:40:44 Initializing JWE encryption key from synchronized object
	2025/10/16 19:40:44 Creating in-cluster Sidecar client
	2025/10/16 19:40:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:40:44 Serving insecurely on HTTP port: 9090
	2025/10/16 19:41:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:40:43 Starting overwatch
	
	
	==> storage-provisioner [83d913eaf88e4605f8517296d5310c6a465cdae0f4f71ad50a666244e2417d90] <==
	I1016 19:40:23.347940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:40:53.350573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cd6d956ac80482a1f6bcfd367df30bb090e7bdac8f8122b9e06803726d3d4015] <==
	I1016 19:40:54.344305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:40:54.358755       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:40:54.358870       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1016 19:41:11.769933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:41:11.771288       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea29456-c19e-483f-960c-d85113c7aa2e", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-663330_268cbf65-555f-4d91-9077-dfaf7c36db11 became leader
	I1016 19:41:11.771382       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-663330_268cbf65-555f-4d91-9077-dfaf7c36db11!
	I1016 19:41:11.880539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-663330_268cbf65-555f-4d91-9077-dfaf7c36db11!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-663330 -n old-k8s-version-663330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-663330 -n old-k8s-version-663330: exit status 2 (497.529449ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-663330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (289.183271ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:42:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-225696 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-225696 describe deploy/metrics-server -n kube-system: exit status 1 (80.922111ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-225696 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-225696
helpers_test.go:243: (dbg) docker inspect no-preload-225696:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a",
	        "Created": "2025-10-16T19:41:24.445990771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475286,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:41:24.901742441Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/hosts",
	        "LogPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a-json.log",
	        "Name": "/no-preload-225696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-225696:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-225696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a",
	                "LowerDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-225696",
	                "Source": "/var/lib/docker/volumes/no-preload-225696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-225696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-225696",
	                "name.minikube.sigs.k8s.io": "no-preload-225696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8de73599a3115ac82e780c1e0932e4178c0965c1850edc696fb87b792821eb0f",
	            "SandboxKey": "/var/run/docker/netns/8de73599a311",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-225696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:49:e7:24:db:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39b67ad0eeb0bd39715bf4033d345a54f5da2b5672e2db285dbc6c4fed23f45e",
	                    "EndpointID": "ef8eb42e597f20cef2c2fe2655fc5363539ae7dda6c23887f940394f19d8f38d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-225696",
	                        "67fd0d064b81"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-225696 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-225696 logs -n 25: (1.245826443s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-078761 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ ssh     │ -p cilium-078761 sudo crio config                                                                                                                                                                                                             │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │                     │
	│ delete  │ -p cilium-078761                                                                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p force-systemd-env-871877                                                                                                                                                                                                                   │ force-systemd-env-871877 │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-options-853056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ cert-options-853056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ -p cert-options-853056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p cert-options-853056                                                                                                                                                                                                                        │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:39 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │                     │
	│ stop    │ -p old-k8s-version-663330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │ 16 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p cert-expiration-828182                                                                                                                                                                                                                     │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696        │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696        │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:41:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:41:25.127973  475347 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:41:25.128279  475347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:25.128288  475347 out.go:374] Setting ErrFile to fd 2...
	I1016 19:41:25.128293  475347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:41:25.128571  475347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:41:25.129040  475347 out.go:368] Setting JSON to false
	I1016 19:41:25.130017  475347 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8615,"bootTime":1760635071,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:41:25.130092  475347 start.go:141] virtualization:  
	I1016 19:41:25.135243  475347 out.go:179] * [embed-certs-751669] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:41:25.138261  475347 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:41:25.138306  475347 notify.go:220] Checking for updates...
	I1016 19:41:25.141961  475347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:41:25.144932  475347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:41:25.148017  475347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:41:25.151619  475347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:41:25.153776  475347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:41:25.157579  475347 config.go:182] Loaded profile config "no-preload-225696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:25.157682  475347 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:41:25.209678  475347 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:41:25.209949  475347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:41:25.370008  475347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:41:25.359514916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:41:25.370112  475347 docker.go:318] overlay module found
	I1016 19:41:25.373946  475347 out.go:179] * Using the docker driver based on user configuration
	I1016 19:41:25.377039  475347 start.go:305] selected driver: docker
	I1016 19:41:25.377177  475347 start.go:925] validating driver "docker" against <nil>
	I1016 19:41:25.377225  475347 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:41:25.377905  475347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:41:25.554807  475347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-16 19:41:25.539083991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:41:25.554977  475347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 19:41:25.555371  475347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:41:25.565266  475347 out.go:179] * Using Docker driver with root privileges
	I1016 19:41:25.573272  475347 cni.go:84] Creating CNI manager for ""
	I1016 19:41:25.573362  475347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:41:25.573380  475347 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 19:41:25.573480  475347 start.go:349] cluster config:
	{Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:25.581249  475347 out.go:179] * Starting "embed-certs-751669" primary control-plane node in "embed-certs-751669" cluster
	I1016 19:41:25.589260  475347 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:41:25.592148  475347 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:41:25.595154  475347 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:25.595233  475347 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:41:25.595247  475347 cache.go:58] Caching tarball of preloaded images
	I1016 19:41:25.595270  475347 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:41:25.595342  475347 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:41:25.595352  475347 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:41:25.595485  475347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/config.json ...
	I1016 19:41:25.595506  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/config.json: {Name:mk0d5145ebf2a04770cd733d3947a49de20318df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:25.648835  475347 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:41:25.648857  475347 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:41:25.648871  475347 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:41:25.648895  475347 start.go:360] acquireMachinesLock for embed-certs-751669: {Name:mkb92787bce004fe7aa2e02dbed85cdecf06ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:41:25.649000  475347 start.go:364] duration metric: took 91.266µs to acquireMachinesLock for "embed-certs-751669"
	I1016 19:41:25.649026  475347 start.go:93] Provisioning new machine with config: &{Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:41:25.649098  475347 start.go:125] createHost starting for "" (driver="docker")
	I1016 19:41:23.419192  474835 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 19:41:23.419517  474835 start.go:159] libmachine.API.Create for "no-preload-225696" (driver="docker")
	I1016 19:41:23.419565  474835 client.go:168] LocalClient.Create starting
	I1016 19:41:23.419657  474835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 19:41:23.419704  474835 main.go:141] libmachine: Decoding PEM data...
	I1016 19:41:23.419723  474835 main.go:141] libmachine: Parsing certificate...
	I1016 19:41:23.419794  474835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 19:41:23.419818  474835 main.go:141] libmachine: Decoding PEM data...
	I1016 19:41:23.419835  474835 main.go:141] libmachine: Parsing certificate...
	I1016 19:41:23.420273  474835 cli_runner.go:164] Run: docker network inspect no-preload-225696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 19:41:23.445782  474835 cli_runner.go:211] docker network inspect no-preload-225696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 19:41:23.445900  474835 network_create.go:284] running [docker network inspect no-preload-225696] to gather additional debugging logs...
	I1016 19:41:23.445924  474835 cli_runner.go:164] Run: docker network inspect no-preload-225696
	W1016 19:41:23.463353  474835 cli_runner.go:211] docker network inspect no-preload-225696 returned with exit code 1
	I1016 19:41:23.463386  474835 network_create.go:287] error running [docker network inspect no-preload-225696]: docker network inspect no-preload-225696: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-225696 not found
	I1016 19:41:23.463404  474835 network_create.go:289] output of [docker network inspect no-preload-225696]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-225696 not found
	
	** /stderr **
	I1016 19:41:23.463509  474835 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:41:23.481402  474835 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
	I1016 19:41:23.481893  474835 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcb5241e782 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:58:26:d7:8f:45} reservation:<nil>}
	I1016 19:41:23.482166  474835 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-26579fafc836 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:48:af:83:92:ac} reservation:<nil>}
	I1016 19:41:23.482623  474835 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c072d0}
	I1016 19:41:23.482652  474835 network_create.go:124] attempt to create docker network no-preload-225696 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1016 19:41:23.482712  474835 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-225696 no-preload-225696
	I1016 19:41:23.564830  474835 network_create.go:108] docker network no-preload-225696 192.168.76.0/24 created
	I1016 19:41:23.564916  474835 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-225696" container
	I1016 19:41:23.565023  474835 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 19:41:23.583724  474835 cli_runner.go:164] Run: docker volume create no-preload-225696 --label name.minikube.sigs.k8s.io=no-preload-225696 --label created_by.minikube.sigs.k8s.io=true
	I1016 19:41:23.603150  474835 oci.go:103] Successfully created a docker volume no-preload-225696
	I1016 19:41:23.603227  474835 cli_runner.go:164] Run: docker run --rm --name no-preload-225696-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-225696 --entrypoint /usr/bin/test -v no-preload-225696:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 19:41:23.766046  474835 cache.go:162] opening:  /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1016 19:41:23.768925  474835 cache.go:162] opening:  /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1016 19:41:23.826483  474835 cache.go:162] opening:  /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1016 19:41:23.834426  474835 cache.go:162] opening:  /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1016 19:41:23.834796  474835 cache.go:162] opening:  /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1016 19:41:23.835627  474835 cache.go:162] opening:  /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1016 19:41:23.845359  474835 cache.go:162] opening:  /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1016 19:41:23.910480  474835 cache.go:157] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1016 19:41:23.910506  474835 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 520.186485ms
	I1016 19:41:23.910517  474835 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1016 19:41:24.280591  474835 oci.go:107] Successfully prepared a docker volume no-preload-225696
	I1016 19:41:24.280676  474835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1016 19:41:24.280826  474835 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 19:41:24.280970  474835 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 19:41:24.405390  474835 cache.go:157] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1016 19:41:24.405416  474835 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.016153992s
	I1016 19:41:24.405439  474835 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1016 19:41:24.424136  474835 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-225696 --name no-preload-225696 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-225696 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-225696 --network no-preload-225696 --ip 192.168.76.2 --volume no-preload-225696:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 19:41:24.701480  474835 cache.go:157] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1016 19:41:24.701516  474835 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.311577789s
	I1016 19:41:24.701530  474835 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1016 19:41:24.898027  474835 cache.go:157] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1016 19:41:24.898104  474835 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.507105282s
	I1016 19:41:24.898132  474835 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1016 19:41:24.899970  474835 cache.go:157] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1016 19:41:24.899999  474835 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.510453283s
	I1016 19:41:24.900012  474835 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1016 19:41:24.915676  474835 cache.go:157] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1016 19:41:24.915706  474835 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.525970494s
	I1016 19:41:24.915728  474835 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1016 19:41:25.184575  474835 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Running}}
	I1016 19:41:25.275633  474835 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:41:25.367873  474835 cli_runner.go:164] Run: docker exec no-preload-225696 stat /var/lib/dpkg/alternatives/iptables
	I1016 19:41:25.457762  474835 oci.go:144] the created container "no-preload-225696" has a running status.
	I1016 19:41:25.457789  474835 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa...
	I1016 19:41:26.267494  474835 cache.go:157] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1016 19:41:26.267526  474835 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.876885148s
	I1016 19:41:26.267538  474835 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1016 19:41:26.267549  474835 cache.go:87] Successfully saved all images to host disk.
	I1016 19:41:26.494958  474835 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 19:41:26.515113  474835 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:41:26.541327  474835 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 19:41:26.541346  474835 kic_runner.go:114] Args: [docker exec --privileged no-preload-225696 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 19:41:26.589096  474835 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:41:26.623062  474835 machine.go:93] provisionDockerMachine start ...
	I1016 19:41:26.623163  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:26.642826  474835 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:26.643158  474835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1016 19:41:26.643169  474835 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:41:26.643933  474835 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:41:25.652698  475347 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 19:41:25.652934  475347 start.go:159] libmachine.API.Create for "embed-certs-751669" (driver="docker")
	I1016 19:41:25.652978  475347 client.go:168] LocalClient.Create starting
	I1016 19:41:25.653036  475347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 19:41:25.653070  475347 main.go:141] libmachine: Decoding PEM data...
	I1016 19:41:25.653086  475347 main.go:141] libmachine: Parsing certificate...
	I1016 19:41:25.653234  475347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 19:41:25.653258  475347 main.go:141] libmachine: Decoding PEM data...
	I1016 19:41:25.653269  475347 main.go:141] libmachine: Parsing certificate...
	I1016 19:41:25.653655  475347 cli_runner.go:164] Run: docker network inspect embed-certs-751669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 19:41:25.711787  475347 cli_runner.go:211] docker network inspect embed-certs-751669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 19:41:25.711869  475347 network_create.go:284] running [docker network inspect embed-certs-751669] to gather additional debugging logs...
	I1016 19:41:25.711888  475347 cli_runner.go:164] Run: docker network inspect embed-certs-751669
	W1016 19:41:25.816402  475347 cli_runner.go:211] docker network inspect embed-certs-751669 returned with exit code 1
	I1016 19:41:25.816432  475347 network_create.go:287] error running [docker network inspect embed-certs-751669]: docker network inspect embed-certs-751669: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-751669 not found
	I1016 19:41:25.816444  475347 network_create.go:289] output of [docker network inspect embed-certs-751669]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-751669 not found
	
	** /stderr **
	I1016 19:41:25.816556  475347 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:41:25.851113  475347 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
	I1016 19:41:25.851472  475347 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcb5241e782 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:58:26:d7:8f:45} reservation:<nil>}
	I1016 19:41:25.851702  475347 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-26579fafc836 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:48:af:83:92:ac} reservation:<nil>}
	I1016 19:41:25.852012  475347 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-39b67ad0eeb0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:f4:5c:b8:12:49} reservation:<nil>}
	I1016 19:41:25.852428  475347 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a50db0}
	I1016 19:41:25.852445  475347 network_create.go:124] attempt to create docker network embed-certs-751669 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1016 19:41:25.852507  475347 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-751669 embed-certs-751669
	I1016 19:41:25.939699  475347 network_create.go:108] docker network embed-certs-751669 192.168.85.0/24 created
	I1016 19:41:25.939727  475347 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-751669" container
	I1016 19:41:25.939799  475347 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 19:41:25.957416  475347 cli_runner.go:164] Run: docker volume create embed-certs-751669 --label name.minikube.sigs.k8s.io=embed-certs-751669 --label created_by.minikube.sigs.k8s.io=true
	I1016 19:41:25.980130  475347 oci.go:103] Successfully created a docker volume embed-certs-751669
	I1016 19:41:25.980229  475347 cli_runner.go:164] Run: docker run --rm --name embed-certs-751669-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-751669 --entrypoint /usr/bin/test -v embed-certs-751669:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 19:41:26.679065  475347 oci.go:107] Successfully prepared a docker volume embed-certs-751669
	I1016 19:41:26.679123  475347 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:26.679143  475347 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 19:41:26.679225  475347 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-751669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 19:41:29.792775  474835 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-225696
	
	I1016 19:41:29.792801  474835 ubuntu.go:182] provisioning hostname "no-preload-225696"
	I1016 19:41:29.792869  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:29.814814  474835 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:29.815141  474835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1016 19:41:29.815158  474835 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-225696 && echo "no-preload-225696" | sudo tee /etc/hostname
	I1016 19:41:30.029048  474835 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-225696
	
	I1016 19:41:30.029180  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:30.085349  474835 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:30.085688  474835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1016 19:41:30.085725  474835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-225696' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-225696/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-225696' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:41:30.241571  474835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:41:30.241601  474835 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:41:30.241622  474835 ubuntu.go:190] setting up certificates
	I1016 19:41:30.241632  474835 provision.go:84] configureAuth start
	I1016 19:41:30.241698  474835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-225696
	I1016 19:41:30.260256  474835 provision.go:143] copyHostCerts
	I1016 19:41:30.260325  474835 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:41:30.260344  474835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:41:30.260434  474835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:41:30.260534  474835 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:41:30.260546  474835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:41:30.260574  474835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:41:30.260631  474835 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:41:30.260639  474835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:41:30.260665  474835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:41:30.260720  474835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.no-preload-225696 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-225696]
	I1016 19:41:30.800846  474835 provision.go:177] copyRemoteCerts
	I1016 19:41:30.800930  474835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:41:30.800979  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:30.818115  474835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:41:30.920932  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 19:41:30.938765  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:41:30.958147  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:41:30.976715  474835 provision.go:87] duration metric: took 735.06979ms to configureAuth
	I1016 19:41:30.976740  474835 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:41:30.976923  474835 config.go:182] Loaded profile config "no-preload-225696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:30.977027  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:30.994560  474835 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:30.994867  474835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1016 19:41:30.994891  474835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:41:31.413569  474835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:41:31.413644  474835 machine.go:96] duration metric: took 4.790561088s to provisionDockerMachine
	I1016 19:41:31.413669  474835 client.go:171] duration metric: took 7.994091262s to LocalClient.Create
	I1016 19:41:31.413717  474835 start.go:167] duration metric: took 7.994208704s to libmachine.API.Create "no-preload-225696"
	I1016 19:41:31.413746  474835 start.go:293] postStartSetup for "no-preload-225696" (driver="docker")
	I1016 19:41:31.413775  474835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:41:31.413902  474835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:41:31.413980  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:31.432265  474835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:41:31.562854  474835 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:41:31.567370  474835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:41:31.567397  474835 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:41:31.567409  474835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:41:31.567462  474835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:41:31.567542  474835 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:41:31.567644  474835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:41:31.582523  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:41:31.617058  474835 start.go:296] duration metric: took 203.279548ms for postStartSetup
	I1016 19:41:31.617471  474835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-225696
	I1016 19:41:31.638786  474835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/config.json ...
	I1016 19:41:31.639071  474835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:41:31.639116  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:31.663051  474835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:41:31.798480  474835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:41:31.805122  474835 start.go:128] duration metric: took 8.389589769s to createHost
	I1016 19:41:31.805211  474835 start.go:83] releasing machines lock for "no-preload-225696", held for 8.389789204s
	I1016 19:41:31.805301  474835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-225696
	I1016 19:41:31.827048  474835 ssh_runner.go:195] Run: cat /version.json
	I1016 19:41:31.827109  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:31.827350  474835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:41:31.827407  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:41:31.861162  474835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:41:31.878459  474835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:41:32.148545  474835 ssh_runner.go:195] Run: systemctl --version
	I1016 19:41:32.156004  474835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:41:32.264982  474835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:41:32.273038  474835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:41:32.273109  474835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:41:32.331523  474835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1016 19:41:32.331549  474835 start.go:495] detecting cgroup driver to use...
	I1016 19:41:32.331583  474835 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:41:32.331640  474835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:41:32.369615  474835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:41:32.394099  474835 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:41:32.394158  474835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:41:32.421630  474835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:41:32.446954  474835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:41:32.758834  474835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:41:32.969673  474835 docker.go:234] disabling docker service ...
	I1016 19:41:32.969743  474835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:41:32.997390  474835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:41:33.012718  474835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:41:33.139582  474835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:41:33.269221  474835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:41:33.281922  474835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:41:33.295362  474835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:41:33.295435  474835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:33.304998  474835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:41:33.305114  474835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:33.314690  474835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:33.323342  474835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:33.332108  474835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:41:33.341352  474835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:33.352002  474835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:33.366786  474835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:33.375298  474835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:41:33.382836  474835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:41:33.390411  474835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:33.507864  474835 ssh_runner.go:195] Run: sudo systemctl restart crio
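The sed commands above only touch a handful of keys in the stock kicbase drop-in. Based on those edits alone (the rest of the file is not shown in this log, and the section headers below follow CRI-O's usual layout rather than anything visible here), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The systemctl daemon-reload and crio restart that follow are what make these edits take effect before kubeadm runs.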
	I1016 19:41:33.681570  474835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:41:33.681635  474835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:41:33.688105  474835 start.go:563] Will wait 60s for crictl version
	I1016 19:41:33.688170  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:33.692908  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:41:33.746827  474835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:41:33.746916  474835 ssh_runner.go:195] Run: crio --version
	I1016 19:41:33.796870  474835 ssh_runner.go:195] Run: crio --version
	I1016 19:41:33.846391  474835 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:41:31.444168  475347 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-751669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.764885911s)
	I1016 19:41:31.444211  475347 kic.go:203] duration metric: took 4.765052526s to extract preloaded images to volume ...
	W1016 19:41:31.444341  475347 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 19:41:31.444451  475347 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 19:41:31.520193  475347 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-751669 --name embed-certs-751669 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-751669 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-751669 --network embed-certs-751669 --ip 192.168.85.2 --volume embed-certs-751669:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 19:41:31.935692  475347 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Running}}
	I1016 19:41:31.959218  475347 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:41:31.983385  475347 cli_runner.go:164] Run: docker exec embed-certs-751669 stat /var/lib/dpkg/alternatives/iptables
	I1016 19:41:32.058356  475347 oci.go:144] the created container "embed-certs-751669" has a running status.
	I1016 19:41:32.058399  475347 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa...
	I1016 19:41:32.769122  475347 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 19:41:32.802191  475347 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:41:32.821748  475347 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 19:41:32.821780  475347 kic_runner.go:114] Args: [docker exec --privileged embed-certs-751669 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 19:41:32.890178  475347 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
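The id_rsa/id_rsa.pub pair created under .minikube/machines/embed-certs-751669 and copied into /home/docker/.ssh/authorized_keys above is a plain OpenSSH keypair. A minimal Go sketch of producing one (the file names and the 2048-bit key size are illustrative, not necessarily what minikube uses):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate an RSA key; 2048 bits is an illustrative choice.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		// Private key in PEM (PKCS#1), like machines/<name>/id_rsa.
		priv := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
			panic(err)
		}

		// Public key in authorized_keys format, like id_rsa.pub.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			panic(err)
		}
	}

The 381-byte authorized_keys payload logged above is roughly the size of a single RSA public-key line in that format.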
	I1016 19:41:32.919684  475347 machine.go:93] provisionDockerMachine start ...
	I1016 19:41:32.919776  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:32.941446  475347 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:32.941883  475347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1016 19:41:32.941902  475347 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:41:32.942788  475347 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
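The "Error dialing TCP: ssh: handshake failed: EOF" above is expected on a container that has only just started: sshd inside it is not listening yet, and the provisioner simply retries until the hostname command at 19:41:36 succeeds. A rough, self-contained sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh (the address, key path and retry budget are made up for illustration):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("id_rsa") // the machine's private key
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
			Timeout:         5 * time.Second,
		}

		var client *ssh.Client
		for attempt := 1; attempt <= 30; attempt++ { // keep retrying until sshd is up
			client, err = ssh.Dial("tcp", "127.0.0.1:33428", cfg)
			if err == nil {
				break
			}
			time.Sleep(2 * time.Second)
		}
		if client == nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}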
	I1016 19:41:33.849677  474835 cli_runner.go:164] Run: docker network inspect no-preload-225696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:41:33.865542  474835 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 19:41:33.869722  474835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
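The one-liner above strips any existing host.minikube.internal line from /etc/hosts and appends the gateway mapping, so repeated starts do not accumulate duplicate entries. The same idempotent update expressed as a small Go sketch (the file path is a local scratch copy here; the real command edits /etc/hosts on the node under sudo, and blank lines are dropped for simplicity):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line for `host` and appends "ip\thost".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop stale entries for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// Use a scratch copy named "hosts" when trying this out; /etc/hosts needs root.
		if err := ensureHostsEntry("hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}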
	I1016 19:41:33.881236  474835 kubeadm.go:883] updating cluster {Name:no-preload-225696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-225696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:41:33.881346  474835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:33.881389  474835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:41:33.917051  474835 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1016 19:41:33.917072  474835 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1016 19:41:33.917107  474835 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:41:33.917333  474835 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1016 19:41:33.917425  474835 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1016 19:41:33.917511  474835 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1016 19:41:33.917604  474835 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1016 19:41:33.917687  474835 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1016 19:41:33.917762  474835 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1016 19:41:33.917869  474835 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1016 19:41:33.920250  474835 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1016 19:41:33.920475  474835 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1016 19:41:33.920604  474835 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1016 19:41:33.920725  474835 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1016 19:41:33.920839  474835 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:41:33.921109  474835 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1016 19:41:33.921386  474835 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1016 19:41:33.921469  474835 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1016 19:41:34.191320  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1016 19:41:34.234465  474835 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1016 19:41:34.234505  474835 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1016 19:41:34.234573  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:34.238603  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1016 19:41:34.256747  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1016 19:41:34.267212  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1016 19:41:34.271990  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1016 19:41:34.286984  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1016 19:41:34.301362  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1016 19:41:34.302464  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1016 19:41:34.312274  474835 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1016 19:41:34.312313  474835 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1016 19:41:34.312363  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:34.341535  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1016 19:41:34.353994  474835 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1016 19:41:34.354032  474835 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1016 19:41:34.354080  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:34.357923  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1016 19:41:34.380251  474835 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1016 19:41:34.380292  474835 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1016 19:41:34.380341  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:34.435762  474835 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1016 19:41:34.435849  474835 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1016 19:41:34.435927  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:34.436106  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1016 19:41:34.436049  474835 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1016 19:41:34.436415  474835 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1016 19:41:34.436502  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:34.436545  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1016 19:41:34.436259  474835 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1016 19:41:34.436678  474835 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1016 19:41:34.436742  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:34.436305  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1016 19:41:34.436184  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1016 19:41:34.436949  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1016 19:41:34.446276  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1016 19:41:34.501341  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1016 19:41:34.501418  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1016 19:41:34.501499  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1016 19:41:34.501543  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1016 19:41:34.501419  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1016 19:41:34.501636  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1016 19:41:34.501672  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1016 19:41:34.501739  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1016 19:41:34.624568  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1016 19:41:34.624637  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1016 19:41:34.624687  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1016 19:41:34.624725  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1016 19:41:34.624762  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1016 19:41:34.624797  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1016 19:41:34.750160  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1016 19:41:34.750338  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1016 19:41:34.750449  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1016 19:41:34.750538  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1016 19:41:34.750623  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1016 19:41:34.750786  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1016 19:41:34.750907  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1016 19:41:34.751042  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1016 19:41:34.751062  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1016 19:41:34.751278  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1016 19:41:34.813172  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1016 19:41:34.813246  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1016 19:41:34.813276  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1016 19:41:34.813277  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1016 19:41:34.813389  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1016 19:41:34.813538  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1016 19:41:34.813444  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1016 19:41:34.813676  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1016 19:41:34.813475  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1016 19:41:34.813518  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1016 19:41:34.814326  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1016 19:41:34.814413  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1016 19:41:34.863179  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1016 19:41:34.863217  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1016 19:41:34.863267  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1016 19:41:34.863284  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1016 19:41:34.882283  474835 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1016 19:41:34.882360  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1016 19:41:35.196175  474835 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1016 19:41:35.196473  474835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:41:35.269830  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1016 19:41:35.280496  474835 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1016 19:41:35.280660  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1016 19:41:35.341945  474835 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1016 19:41:35.342048  474835 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:41:35.342138  474835 ssh_runner.go:195] Run: which crictl
	I1016 19:41:37.420088  474835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.139378406s)
	I1016 19:41:37.420119  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1016 19:41:37.420137  474835 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1016 19:41:37.420183  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1016 19:41:37.420257  474835 ssh_runner.go:235] Completed: which crictl: (2.078091584s)
	I1016 19:41:37.420289  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
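Because this profile is started without the preload tarball (no-preload), each control-plane image is handled individually: inspect it in the runtime, remove any stale tag with crictl, copy the cached tarball from the host into /var/lib/minikube/images, and load it with podman, which is what the interleaved calls above are doing. A condensed Go sketch of that per-image decision on the node itself (the helper name is invented, the scp from the host cache is omitted, and minikube really drives these commands over SSH rather than locally):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// ensureImage makes sure `image` is present in CRI-O, loading the cached
	// tarball from cacheDir when the runtime does not already have it.
	func ensureImage(image, cacheDir string) error {
		// Is the image already in the container runtime?
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // already present
		}

		// Remove any stale tag so the load below wins.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()

		// e.g. etcd_3.6.4-0 for registry.k8s.io/etcd:3.6.4-0
		base := strings.ReplaceAll(filepath.Base(image), ":", "_")
		tarball := filepath.Join(cacheDir, base)
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("no cached tarball for %s: %w", image, err)
		}

		// Load the image archive into the runtime's storage.
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
		}
		return nil
	}

	func main() {
		for _, img := range []string{
			"registry.k8s.io/pause:3.10.1",
			"registry.k8s.io/etcd:3.6.4-0",
		} {
			if err := ensureImage(img, "/var/lib/minikube/images"); err != nil {
				panic(err)
			}
		}
	}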
	I1016 19:41:36.097010  475347 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-751669
	
	I1016 19:41:36.097037  475347 ubuntu.go:182] provisioning hostname "embed-certs-751669"
	I1016 19:41:36.097104  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:36.125751  475347 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:36.126071  475347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1016 19:41:36.126088  475347 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-751669 && echo "embed-certs-751669" | sudo tee /etc/hostname
	I1016 19:41:36.295610  475347 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-751669
	
	I1016 19:41:36.295761  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:36.318693  475347 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:36.319016  475347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1016 19:41:36.319033  475347 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-751669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-751669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-751669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:41:36.473609  475347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:41:36.473684  475347 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:41:36.473722  475347 ubuntu.go:190] setting up certificates
	I1016 19:41:36.473770  475347 provision.go:84] configureAuth start
	I1016 19:41:36.473873  475347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:41:36.497347  475347 provision.go:143] copyHostCerts
	I1016 19:41:36.497407  475347 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:41:36.497416  475347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:41:36.497485  475347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:41:36.497564  475347 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:41:36.497569  475347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:41:36.497594  475347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:41:36.497641  475347 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:41:36.497645  475347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:41:36.497667  475347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:41:36.497710  475347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.embed-certs-751669 san=[127.0.0.1 192.168.85.2 embed-certs-751669 localhost minikube]
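configureAuth signs a per-machine server certificate against the CA in ~/.minikube/certs, with the SANs listed above (loopback, the node IP, the machine name, localhost, minikube). A compact Go sketch of issuing such a SAN-bearing certificate with crypto/x509; unlike minikube it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and placing the org= value in the Subject is an assumption:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Illustrative only: a throwaway CA instead of minikube's ca.pem/ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate carrying the SANs from the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-751669"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"embed-certs-751669", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)

		check(os.WriteFile("server.pem",
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644))
		check(os.WriteFile("server-key.pem",
			pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600))
	}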
	I1016 19:41:36.853714  475347 provision.go:177] copyRemoteCerts
	I1016 19:41:36.853797  475347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:41:36.853845  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:36.872814  475347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:41:36.978904  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 19:41:37.000918  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:41:37.023949  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:41:37.051171  475347 provision.go:87] duration metric: took 577.351309ms to configureAuth
	I1016 19:41:37.051203  475347 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:41:37.051386  475347 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:41:37.051488  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:37.070889  475347 main.go:141] libmachine: Using SSH client type: native
	I1016 19:41:37.071319  475347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1016 19:41:37.071342  475347 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:41:37.355697  475347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:41:37.355769  475347 machine.go:96] duration metric: took 4.436060805s to provisionDockerMachine
	I1016 19:41:37.355813  475347 client.go:171] duration metric: took 11.702828433s to LocalClient.Create
	I1016 19:41:37.355860  475347 start.go:167] duration metric: took 11.702927338s to libmachine.API.Create "embed-certs-751669"
	I1016 19:41:37.355885  475347 start.go:293] postStartSetup for "embed-certs-751669" (driver="docker")
	I1016 19:41:37.355930  475347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:41:37.356032  475347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:41:37.356110  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:37.374815  475347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:41:37.482512  475347 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:41:37.487156  475347 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:41:37.487185  475347 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:41:37.487195  475347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:41:37.487253  475347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:41:37.487336  475347 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:41:37.487436  475347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:41:37.496704  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:41:37.517011  475347 start.go:296] duration metric: took 161.077853ms for postStartSetup
	I1016 19:41:37.517467  475347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:41:37.536947  475347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/config.json ...
	I1016 19:41:37.537320  475347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:41:37.537368  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:37.555655  475347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:41:37.663366  475347 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:41:37.668989  475347 start.go:128] duration metric: took 12.019873471s to createHost
	I1016 19:41:37.669057  475347 start.go:83] releasing machines lock for "embed-certs-751669", held for 12.020047569s
	I1016 19:41:37.669184  475347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:41:37.688084  475347 ssh_runner.go:195] Run: cat /version.json
	I1016 19:41:37.688136  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:37.688355  475347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:41:37.688410  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:41:37.717019  475347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:41:37.737713  475347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:41:37.919785  475347 ssh_runner.go:195] Run: systemctl --version
	I1016 19:41:37.926630  475347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:41:37.971089  475347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:41:37.976033  475347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:41:37.976175  475347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:41:38.007232  475347 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1016 19:41:38.007311  475347 start.go:495] detecting cgroup driver to use...
	I1016 19:41:38.007380  475347 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:41:38.007476  475347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:41:38.031450  475347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:41:38.049164  475347 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:41:38.049287  475347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:41:38.078797  475347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:41:38.105308  475347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:41:38.306223  475347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:41:38.522185  475347 docker.go:234] disabling docker service ...
	I1016 19:41:38.522251  475347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:41:38.565532  475347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:41:38.581519  475347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:41:38.774514  475347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:41:38.955338  475347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:41:38.969443  475347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:41:38.987858  475347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:41:38.987943  475347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:38.999785  475347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:41:38.999867  475347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:39.011396  475347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:39.024106  475347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:39.036152  475347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:41:39.049462  475347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:39.058275  475347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:39.074332  475347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:41:39.086232  475347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:41:39.096621  475347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:41:39.105535  475347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:39.253226  475347 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:41:39.978491  475347 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:41:39.978629  475347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:41:39.983322  475347 start.go:563] Will wait 60s for crictl version
	I1016 19:41:39.983437  475347 ssh_runner.go:195] Run: which crictl
	I1016 19:41:39.987777  475347 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:41:40.024000  475347 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:41:40.024188  475347 ssh_runner.go:195] Run: crio --version
	I1016 19:41:40.065279  475347 ssh_runner.go:195] Run: crio --version
	I1016 19:41:40.106464  475347 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:41:40.109316  475347 cli_runner.go:164] Run: docker network inspect embed-certs-751669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:41:40.127001  475347 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:41:40.132250  475347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:41:40.145034  475347 kubeadm.go:883] updating cluster {Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:41:40.145167  475347 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:41:40.145251  475347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:41:40.207697  475347 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:41:40.207734  475347 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:41:40.207797  475347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:41:40.239464  475347 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:41:40.239493  475347 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:41:40.239501  475347 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:41:40.239643  475347 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-751669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:41:40.239790  475347 ssh_runner.go:195] Run: crio config
	I1016 19:41:40.341588  475347 cni.go:84] Creating CNI manager for ""
	I1016 19:41:40.341625  475347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:41:40.341640  475347 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:41:40.341663  475347 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-751669 NodeName:embed-certs-751669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:41:40.341825  475347 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-751669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:41:40.341912  475347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:41:40.351116  475347 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:41:40.351204  475347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:41:40.359869  475347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1016 19:41:40.379622  475347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:41:40.396419  475347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
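The 2215-byte kubeadm.yaml.new written above is the rendered form of the option set logged at kubeadm.go:190. A much-reduced Go sketch of that render step with text/template; the struct and template below cover only a few of the fields and are not minikube's actual types or template:

	package main

	import (
		"os"
		"text/template"
	)

	// A handful of the options from the log, for illustration only.
	type kubeadmOpts struct {
		NodeName         string
		AdvertiseAddress string
		APIServerPort    int
		PodSubnet        string
		ServiceCIDR      string
		K8sVersion       string
		CRISocket        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		opts := kubeadmOpts{
			NodeName:         "embed-certs-751669",
			AdvertiseAddress: "192.168.85.2",
			APIServerPort:    8443,
			PodSubnet:        "10.244.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
			K8sVersion:       "v1.34.1",
			CRISocket:        "/var/run/crio/crio.sock",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}

Note that in a real program the template literal would be left-aligned so the emitted YAML carries no leading tabs; it is indented here only to match the report's layout.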
	I1016 19:41:40.414020  475347 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:41:40.418364  475347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:41:40.430072  475347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:40.630953  475347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:41:40.653724  475347 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669 for IP: 192.168.85.2
	I1016 19:41:40.653764  475347 certs.go:195] generating shared ca certs ...
	I1016 19:41:40.653789  475347 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:40.653953  475347 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:41:40.654035  475347 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:41:40.654048  475347 certs.go:257] generating profile certs ...
	I1016 19:41:40.654130  475347 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.key
	I1016 19:41:40.654163  475347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.crt with IP's: []
	I1016 19:41:41.050886  475347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.crt ...
	I1016 19:41:41.050930  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.crt: {Name:mka0f34576687abf0c1ae0b88a627d89527d92f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:41.051236  475347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.key ...
	I1016 19:41:41.051254  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.key: {Name:mkd7bc565fe47870e077c24667b398fa9764b0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:41.051404  475347 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key.98c460c4
	I1016 19:41:41.051432  475347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt.98c460c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1016 19:41:41.714259  475347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt.98c460c4 ...
	I1016 19:41:41.714293  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt.98c460c4: {Name:mkf89483e77e44e590ef151d139c0dad2a6757fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:41.714557  475347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key.98c460c4 ...
	I1016 19:41:41.714576  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key.98c460c4: {Name:mkf390097713e24eb47f68dcc52a553282310698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:41.714750  475347 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt.98c460c4 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt
	I1016 19:41:41.714890  475347 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key.98c460c4 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key
	I1016 19:41:41.714980  475347 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key
	I1016 19:41:41.715006  475347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.crt with IP's: []
	I1016 19:41:42.253704  475347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.crt ...
	I1016 19:41:42.253744  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.crt: {Name:mke41143da978620abea02537050e997b8aa3d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:42.253939  475347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key ...
	I1016 19:41:42.253958  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key: {Name:mk46b31d89e069f3e8f5444b37b19796375a5519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
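The profile certs generated above are signed by the shared minikube CA, with the apiserver cert carrying the service IP, loopback, and node IP as SANs. A minimal Go sketch of issuing a comparable cert with crypto/x509; it creates a throwaway CA in-process rather than loading .minikube/ca.key, so the names and lifetimes here are illustrative:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA key and self-signed CA cert (minikube reuses ca.key/ca.crt).
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().AddDate(10, 0, 0),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Apiserver-style serving cert with the IP SANs seen in the log.
    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"),
            net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.85.2"),
        },
    }
    srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

    // Write the cert in PEM form, as minikube does for apiserver.crt.
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}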
	I1016 19:41:42.254147  475347 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:41:42.254192  475347 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:41:42.254208  475347 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:41:42.254233  475347 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:41:42.254260  475347 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:41:42.254285  475347 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:41:42.254334  475347 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:41:42.254953  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:41:42.280377  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:41:42.305747  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:41:42.328496  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:41:42.350678  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1016 19:41:42.368516  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 19:41:42.388200  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:41:42.408314  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:41:42.427585  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:41:42.447575  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:41:42.467940  475347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:41:42.487929  475347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:41:42.502418  475347 ssh_runner.go:195] Run: openssl version
	I1016 19:41:42.509014  475347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:41:42.518238  475347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:41:42.522493  475347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:41:42.522615  475347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:41:42.564960  475347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:41:42.574325  475347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:41:42.583564  475347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:41:42.587893  475347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:41:42.588011  475347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:41:42.629879  475347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:41:42.639171  475347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:41:42.648169  475347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:42.652347  475347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:42.652469  475347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:42.713050  475347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
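Each "openssl x509 -hash" call above yields the subject hash used to name the <hash>.0 symlink in /etc/ssl/certs, which is how the copied CA files become discoverable to TLS clients. A small Go sketch of that step, shelling out to openssl just as the log does (the paths are examples):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate
// and symlinks it into certsDir as <hash>.0, mirroring the log above.
func linkBySubjectHash(certPath, certsDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join(certsDir, hash+".0")
    _ = os.Remove(link) // replace any stale link, like ln -fs
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}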
	I1016 19:41:42.722321  475347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:41:42.726727  475347 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 19:41:42.726827  475347 kubeadm.go:400] StartCluster: {Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:42.726959  475347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:41:42.727046  475347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:41:42.782877  475347 cri.go:89] found id: ""
	I1016 19:41:42.782991  475347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:41:42.793461  475347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 19:41:42.802554  475347 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 19:41:42.802666  475347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 19:41:42.813342  475347 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 19:41:42.813418  475347 kubeadm.go:157] found existing configuration files:
	
	I1016 19:41:42.813504  475347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 19:41:42.822635  475347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 19:41:42.822751  475347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 19:41:42.830671  475347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 19:41:42.839597  475347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 19:41:42.839741  475347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 19:41:42.847827  475347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 19:41:42.856924  475347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 19:41:42.857040  475347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 19:41:42.865065  475347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 19:41:42.873742  475347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 19:41:42.873854  475347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 19:41:42.881905  475347 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
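The cluster is then bootstrapped by running kubeadm init against the staged config, with preflight checks that cannot pass inside a container explicitly ignored. A hedged Go sketch of issuing a similar command locally; the ignore-preflight list is shortened here for readability and the paths come straight from the log:

package main

import (
    "os"
    "os/exec"
)

func main() {
    // Same shape as the Start command in the log: prepend the version-pinned
    // binaries directory to PATH and run kubeadm init against the staged config.
    script := `env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init ` +
        `--config /var/tmp/minikube/kubeadm.yaml ` +
        `--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification`
    cmd := exec.Command("sudo", "/bin/bash", "-c", script)
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}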
	I1016 19:41:42.928076  475347 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 19:41:42.928463  475347 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 19:41:42.960813  475347 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 19:41:42.960975  475347 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1016 19:41:42.961042  475347 kubeadm.go:318] OS: Linux
	I1016 19:41:42.961123  475347 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 19:41:42.961198  475347 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1016 19:41:42.961251  475347 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 19:41:42.961302  475347 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 19:41:42.961353  475347 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 19:41:42.961410  475347 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 19:41:42.961458  475347 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 19:41:42.961509  475347 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 19:41:42.961559  475347 kubeadm.go:318] CGROUPS_BLKIO: enabled
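The CGROUPS_* lines above come from the preflight system verification. On a cgroup v1 host like this one, a similar report can be produced by parsing /proc/cgroups, as in the sketch below (cgroup v2 systems would need /sys/fs/cgroup/cgroup.controllers instead):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    f, err := os.Open("/proc/cgroups")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // /proc/cgroups columns: subsys_name  hierarchy  num_cgroups  enabled
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        line := sc.Text()
        if strings.HasPrefix(line, "#") {
            continue
        }
        fields := strings.Fields(line)
        if len(fields) < 4 {
            continue
        }
        state := "disabled"
        if fields[3] == "1" {
            state = "enabled"
        }
        fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
    }
}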
	I1016 19:41:43.050436  475347 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 19:41:43.050650  475347 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 19:41:43.050794  475347 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 19:41:43.060942  475347 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 19:41:39.758783  474835 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.338467761s)
	I1016 19:41:39.758863  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:41:39.758994  474835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.3387956s)
	I1016 19:41:39.759012  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1016 19:41:39.759031  474835 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1016 19:41:39.759060  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1016 19:41:41.567027  474835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.807940331s)
	I1016 19:41:41.567050  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1016 19:41:41.567067  474835 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1016 19:41:41.567115  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1016 19:41:41.567167  474835 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.808291989s)
	I1016 19:41:41.567195  474835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:41:43.141318  474835 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.57410353s)
	I1016 19:41:43.141361  474835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1016 19:41:43.141449  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1016 19:41:43.141585  474835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.574459963s)
	I1016 19:41:43.141600  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1016 19:41:43.141616  474835 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1016 19:41:43.141652  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1016 19:41:43.066304  475347 out.go:252]   - Generating certificates and keys ...
	I1016 19:41:43.066455  475347 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 19:41:43.066560  475347 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 19:41:43.373836  475347 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 19:41:43.870354  475347 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 19:41:44.682835  475347 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 19:41:44.946485  475347 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 19:41:45.141553  474835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.999872086s)
	I1016 19:41:45.141586  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1016 19:41:45.141607  474835 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1016 19:41:45.141686  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1016 19:41:45.142408  474835 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.000932594s)
	I1016 19:41:45.142452  474835 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1016 19:41:45.142485  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1016 19:41:45.704242  475347 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 19:41:45.704468  475347 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-751669 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 19:41:46.349519  475347 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 19:41:46.349674  475347 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-751669 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 19:41:46.658345  475347 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 19:41:47.863606  475347 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 19:41:48.454872  475347 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 19:41:48.455118  475347 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 19:41:49.487802  475347 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 19:41:49.988543  474835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.846828376s)
	I1016 19:41:49.988618  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1016 19:41:49.988651  474835 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1016 19:41:49.988729  474835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1016 19:41:50.700026  474835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1016 19:41:50.700109  474835 cache_images.go:124] Successfully loaded all cached images
	I1016 19:41:50.700130  474835 cache_images.go:93] duration metric: took 16.783044199s to LoadCachedImages
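The no-preload run above stages each cached image tarball on the node and imports it with "podman load" so CRI-O can serve it without pulling from a registry. A condensed Go sketch of that loop, run locally for illustration; the real code drives these commands over SSH via ssh_runner, and the cache path and image list here are examples:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
)

func main() {
    cacheDir := "/home/jenkins/.minikube/cache/images" // illustrative path
    nodeDir := "/var/lib/minikube/images"
    images := []string{"kube-apiserver_v1.34.1", "etcd_3.6.4-0", "storage-provisioner_v5"}

    for _, img := range images {
        dst := filepath.Join(nodeDir, img)
        // Transfer the tarball only if it is not already on the node.
        if _, err := os.Stat(dst); err != nil {
            src := filepath.Join(cacheDir, img)
            if out, err := exec.Command("sudo", "cp", src, dst).CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "copy %s: %v: %s\n", img, err, out)
                continue
            }
        }
        // podman load imports the tarball into the node's container storage.
        if out, err := exec.Command("sudo", "podman", "load", "-i", dst).CombinedOutput(); err != nil {
            fmt.Fprintf(os.Stderr, "load %s: %v: %s\n", img, err, out)
        }
    }
}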
	I1016 19:41:50.700174  474835 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1016 19:41:50.700303  474835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-225696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-225696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:41:50.700422  474835 ssh_runner.go:195] Run: crio config
	I1016 19:41:50.793683  474835 cni.go:84] Creating CNI manager for ""
	I1016 19:41:50.793716  474835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:41:50.793738  474835 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:41:50.793763  474835 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-225696 NodeName:no-preload-225696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:41:50.793883  474835 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-225696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:41:50.793955  474835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:41:50.803333  474835 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1016 19:41:50.803397  474835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1016 19:41:50.817500  474835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1016 19:41:50.817594  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1016 19:41:50.817768  474835 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1016 19:41:50.818038  474835 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1016 19:41:50.822861  474835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1016 19:41:50.822936  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1016 19:41:51.760921  474835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:41:51.775122  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1016 19:41:51.779948  474835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1016 19:41:51.780036  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1016 19:41:51.904531  474835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1016 19:41:51.934758  474835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1016 19:41:51.934801  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
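Because this is a no-preload run, the kubectl/kubelet/kubeadm binaries are fetched from dl.k8s.io and pinned against the published .sha256 files, as the checksum=file: URLs above show. A small Go sketch of downloading one binary and verifying it that way (error handling trimmed; the actual download path is minikube's download package referenced in the log):

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "net/http"
    "os"
    "strings"
)

// fetch downloads a URL fully into memory; fine for a sketch, not for
// multi-megabyte binaries in production.
func fetch(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    }
    return io.ReadAll(resp.Body)
}

func main() {
    const binURL = "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"

    bin, err := fetch(binURL)
    if err != nil {
        panic(err)
    }
    sum, err := fetch(binURL + ".sha256")
    if err != nil {
        panic(err)
    }

    // The .sha256 file carries the hex digest; reject the download on mismatch.
    want := strings.Fields(string(sum))[0]
    h := sha256.Sum256(bin)
    if got := hex.EncodeToString(h[:]); got != want {
        panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
    }
    if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
        panic(err)
    }
}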
	I1016 19:41:52.477651  474835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:41:52.490278  474835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1016 19:41:52.514206  474835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:41:52.533417  474835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1016 19:41:52.547947  474835 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:41:52.551573  474835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:41:52.564042  474835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:41:52.711006  474835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:41:52.736753  474835 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696 for IP: 192.168.76.2
	I1016 19:41:52.736775  474835 certs.go:195] generating shared ca certs ...
	I1016 19:41:52.736791  474835 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:52.736926  474835 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:41:52.736970  474835 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:41:52.736981  474835 certs.go:257] generating profile certs ...
	I1016 19:41:52.737033  474835 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.key
	I1016 19:41:52.737051  474835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt with IP's: []
	I1016 19:41:50.581793  475347 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 19:41:51.234279  475347 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 19:41:51.879990  475347 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 19:41:53.240799  475347 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 19:41:53.247468  475347 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 19:41:53.255432  475347 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 19:41:53.260367  475347 out.go:252]   - Booting up control plane ...
	I1016 19:41:53.272449  475347 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 19:41:53.274141  475347 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 19:41:53.275599  475347 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 19:41:53.314030  475347 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 19:41:53.314142  475347 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 19:41:53.321914  475347 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 19:41:53.322041  475347 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 19:41:53.322103  475347 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 19:41:53.482771  475347 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 19:41:53.482890  475347 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 19:41:53.736464  474835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt ...
	I1016 19:41:53.736500  474835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt: {Name:mkdb981dc8b87598393bc3bc1f072558cab544f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:53.736705  474835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.key ...
	I1016 19:41:53.736719  474835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.key: {Name:mkcef89ef97c22e5412113ca17298061a41877a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:53.736812  474835 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.key.17192573
	I1016 19:41:53.736833  474835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.crt.17192573 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1016 19:41:54.162289  474835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.crt.17192573 ...
	I1016 19:41:54.162319  474835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.crt.17192573: {Name:mk424ccfd75fb4cb7884a4d3cd20d7c175902ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:54.162516  474835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.key.17192573 ...
	I1016 19:41:54.162532  474835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.key.17192573: {Name:mkb99fac01b0554ba1ebe2fe6a7eba28c4a5ebc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:54.162618  474835 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.crt.17192573 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.crt
	I1016 19:41:54.162693  474835 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.key.17192573 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.key
	I1016 19:41:54.162757  474835 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.key
	I1016 19:41:54.162776  474835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.crt with IP's: []
	I1016 19:41:55.279414  474835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.crt ...
	I1016 19:41:55.279442  474835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.crt: {Name:mk7d88f5e50f0fbc8ff7540af1390aa80d5e0be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:55.279625  474835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.key ...
	I1016 19:41:55.279640  474835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.key: {Name:mkbd25f969e43aff44a3ff79a8be6e47d2a12e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:41:55.279833  474835 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:41:55.279878  474835 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:41:55.279892  474835 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:41:55.279916  474835 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:41:55.279945  474835 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:41:55.279969  474835 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:41:55.280013  474835 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:41:55.280552  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:41:55.314729  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:41:55.334421  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:41:55.359546  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:41:55.378553  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1016 19:41:55.397617  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:41:55.417054  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:41:55.438931  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 19:41:55.457436  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:41:55.476869  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:41:55.499190  474835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:41:55.516916  474835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:41:55.530459  474835 ssh_runner.go:195] Run: openssl version
	I1016 19:41:55.536885  474835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:41:55.545503  474835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:41:55.549210  474835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:41:55.549353  474835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:41:55.590360  474835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:41:55.598683  474835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:41:55.606888  474835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:55.610643  474835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:55.610707  474835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:41:55.651715  474835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:41:55.660116  474835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:41:55.668375  474835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:41:55.671967  474835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:41:55.672053  474835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:41:55.713268  474835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:41:55.723390  474835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:41:55.727252  474835 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 19:41:55.727305  474835 kubeadm.go:400] StartCluster: {Name:no-preload-225696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-225696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:41:55.727379  474835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:41:55.727442  474835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:41:55.761940  474835 cri.go:89] found id: ""
	I1016 19:41:55.762069  474835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:41:55.779033  474835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 19:41:55.792577  474835 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 19:41:55.792696  474835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 19:41:55.804072  474835 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 19:41:55.804142  474835 kubeadm.go:157] found existing configuration files:
	
	I1016 19:41:55.804233  474835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 19:41:55.813360  474835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 19:41:55.813496  474835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 19:41:55.821707  474835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 19:41:55.830867  474835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 19:41:55.831005  474835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 19:41:55.839224  474835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 19:41:55.854249  474835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 19:41:55.854363  474835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 19:41:55.868022  474835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 19:41:55.877744  474835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 19:41:55.877863  474835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 19:41:55.885320  474835 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 19:41:56.003004  474835 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 19:41:56.003340  474835 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 19:41:56.036915  474835 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 19:41:56.036994  474835 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1016 19:41:56.037038  474835 kubeadm.go:318] OS: Linux
	I1016 19:41:56.037091  474835 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 19:41:56.037176  474835 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1016 19:41:56.037232  474835 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 19:41:56.037286  474835 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 19:41:56.037341  474835 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 19:41:56.037395  474835 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 19:41:56.037445  474835 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 19:41:56.037499  474835 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 19:41:56.037551  474835 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1016 19:41:56.126027  474835 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 19:41:56.126145  474835 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 19:41:56.126248  474835 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 19:41:56.153763  474835 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 19:41:56.159338  474835 out.go:252]   - Generating certificates and keys ...
	I1016 19:41:56.159500  474835 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 19:41:56.159613  474835 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 19:41:56.529304  474835 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 19:41:57.433149  474835 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 19:41:57.982538  474835 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 19:41:55.484457  475347 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001769731s
	I1016 19:41:55.487981  475347 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 19:41:55.488089  475347 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1016 19:41:55.488349  475347 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 19:41:55.488483  475347 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 19:41:58.754772  474835 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 19:41:59.034115  474835 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 19:41:59.034542  474835 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-225696] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1016 19:41:59.769500  474835 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 19:41:59.769668  474835 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-225696] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1016 19:42:01.186795  474835 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 19:42:02.039277  474835 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 19:42:02.120593  474835 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 19:42:02.121124  474835 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 19:42:02.681511  474835 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 19:42:02.834494  474835 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 19:42:03.161513  474835 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 19:42:04.033686  474835 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 19:42:04.327406  474835 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 19:42:04.328486  474835 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 19:42:04.331743  474835 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 19:42:00.580551  475347 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.091984706s
	I1016 19:42:04.422327  475347 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.934300907s
	I1016 19:42:05.989713  475347 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.501633839s
	I1016 19:42:06.020777  475347 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 19:42:06.042384  475347 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 19:42:06.056949  475347 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 19:42:06.057154  475347 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-751669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 19:42:06.071122  475347 kubeadm.go:318] [bootstrap-token] Using token: pijw9n.od0s58ewxiq9hczg
	I1016 19:42:06.074068  475347 out.go:252]   - Configuring RBAC rules ...
	I1016 19:42:06.074196  475347 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 19:42:06.079481  475347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 19:42:06.089383  475347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 19:42:06.094489  475347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 19:42:06.101856  475347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 19:42:06.106548  475347 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 19:42:06.402798  475347 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 19:42:06.926759  475347 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 19:42:07.396718  475347 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 19:42:07.398039  475347 kubeadm.go:318] 
	I1016 19:42:07.398124  475347 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 19:42:07.398136  475347 kubeadm.go:318] 
	I1016 19:42:07.398212  475347 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 19:42:07.398221  475347 kubeadm.go:318] 
	I1016 19:42:07.398247  475347 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 19:42:07.398310  475347 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 19:42:07.398363  475347 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 19:42:07.398371  475347 kubeadm.go:318] 
	I1016 19:42:07.398425  475347 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 19:42:07.398434  475347 kubeadm.go:318] 
	I1016 19:42:07.398481  475347 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 19:42:07.398489  475347 kubeadm.go:318] 
	I1016 19:42:07.398541  475347 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 19:42:07.398619  475347 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 19:42:07.398690  475347 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 19:42:07.398699  475347 kubeadm.go:318] 
	I1016 19:42:07.398782  475347 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 19:42:07.398862  475347 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 19:42:07.398871  475347 kubeadm.go:318] 
	I1016 19:42:07.398955  475347 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pijw9n.od0s58ewxiq9hczg \
	I1016 19:42:07.399060  475347 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 19:42:07.399086  475347 kubeadm.go:318] 	--control-plane 
	I1016 19:42:07.399095  475347 kubeadm.go:318] 
	I1016 19:42:07.399178  475347 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 19:42:07.399193  475347 kubeadm.go:318] 
	I1016 19:42:07.399275  475347 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pijw9n.od0s58ewxiq9hczg \
	I1016 19:42:07.399379  475347 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
	I1016 19:42:07.403811  475347 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 19:42:07.404041  475347 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 19:42:07.404150  475347 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 19:42:07.404171  475347 cni.go:84] Creating CNI manager for ""
	I1016 19:42:07.404182  475347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:42:07.409007  475347 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 19:42:04.336259  474835 out.go:252]   - Booting up control plane ...
	I1016 19:42:04.336372  474835 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 19:42:04.338660  474835 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 19:42:04.341740  474835 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 19:42:04.379029  474835 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 19:42:04.379141  474835 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 19:42:04.391621  474835 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 19:42:04.391725  474835 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 19:42:04.391766  474835 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 19:42:04.591545  474835 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 19:42:04.591670  474835 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 19:42:06.592758  474835 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001454213s
	I1016 19:42:06.596149  474835 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 19:42:06.596250  474835 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1016 19:42:06.596600  474835 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 19:42:06.596702  474835 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 19:42:07.411962  475347 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 19:42:07.417804  475347 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 19:42:07.417830  475347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 19:42:07.450230  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 19:42:07.922556  475347 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 19:42:07.922691  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:07.922757  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-751669 minikube.k8s.io/updated_at=2025_10_16T19_42_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=embed-certs-751669 minikube.k8s.io/primary=true
	I1016 19:42:08.448431  475347 ops.go:34] apiserver oom_adj: -16
	I1016 19:42:08.448566  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:08.948662  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:09.449369  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:09.948628  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:10.865936  474835 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.269437946s
	I1016 19:42:11.610441  474835 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.014272327s
	I1016 19:42:10.449314  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:10.948705  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:11.448872  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:11.949582  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:12.449078  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:12.949092  475347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:13.316524  475347 kubeadm.go:1113] duration metric: took 5.393875256s to wait for elevateKubeSystemPrivileges
	I1016 19:42:13.316551  475347 kubeadm.go:402] duration metric: took 30.589727928s to StartCluster
	I1016 19:42:13.316568  475347 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:42:13.316628  475347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:42:13.317699  475347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:42:13.317925  475347 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:42:13.318201  475347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 19:42:13.318437  475347 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:42:13.318468  475347 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:42:13.318526  475347 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-751669"
	I1016 19:42:13.318541  475347 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-751669"
	I1016 19:42:13.318567  475347 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:42:13.319077  475347 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:42:13.319449  475347 addons.go:69] Setting default-storageclass=true in profile "embed-certs-751669"
	I1016 19:42:13.319468  475347 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-751669"
	I1016 19:42:13.319730  475347 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:42:13.324630  475347 out.go:179] * Verifying Kubernetes components...
	I1016 19:42:13.328514  475347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:42:13.360049  475347 addons.go:238] Setting addon default-storageclass=true in "embed-certs-751669"
	I1016 19:42:13.360088  475347 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:42:13.362429  475347 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:42:13.366393  475347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:42:14.098920  474835 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.502722684s
	I1016 19:42:14.127124  474835 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 19:42:14.146393  474835 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 19:42:14.168939  474835 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 19:42:14.169454  474835 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-225696 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 19:42:14.193870  474835 kubeadm.go:318] [bootstrap-token] Using token: p5om8o.t1b9rhhxpencyzhg
	I1016 19:42:13.371303  475347 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:42:13.371327  475347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:42:13.371404  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:42:13.385128  475347 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:42:13.385197  475347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:42:13.385268  475347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:42:13.419948  475347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:42:13.420995  475347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:42:13.741348  475347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 19:42:13.774859  475347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:42:13.823350  475347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:42:13.864281  475347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:42:14.336101  475347 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1016 19:42:14.339883  475347 node_ready.go:35] waiting up to 6m0s for node "embed-certs-751669" to be "Ready" ...
	I1016 19:42:14.770139  475347 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1016 19:42:14.773102  475347 addons.go:514] duration metric: took 1.454616607s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1016 19:42:14.840040  475347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-751669" context rescaled to 1 replicas
	I1016 19:42:14.196799  474835 out.go:252]   - Configuring RBAC rules ...
	I1016 19:42:14.196925  474835 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 19:42:14.206264  474835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 19:42:14.217993  474835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 19:42:14.222963  474835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 19:42:14.228766  474835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 19:42:14.236016  474835 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 19:42:14.510937  474835 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 19:42:15.023895  474835 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 19:42:15.506394  474835 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 19:42:15.507651  474835 kubeadm.go:318] 
	I1016 19:42:15.507729  474835 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 19:42:15.507734  474835 kubeadm.go:318] 
	I1016 19:42:15.507816  474835 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 19:42:15.507822  474835 kubeadm.go:318] 
	I1016 19:42:15.507848  474835 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 19:42:15.507909  474835 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 19:42:15.507962  474835 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 19:42:15.507967  474835 kubeadm.go:318] 
	I1016 19:42:15.508024  474835 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 19:42:15.508029  474835 kubeadm.go:318] 
	I1016 19:42:15.508078  474835 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 19:42:15.508083  474835 kubeadm.go:318] 
	I1016 19:42:15.508137  474835 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 19:42:15.508215  474835 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 19:42:15.508286  474835 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 19:42:15.508291  474835 kubeadm.go:318] 
	I1016 19:42:15.508380  474835 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 19:42:15.508460  474835 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 19:42:15.508465  474835 kubeadm.go:318] 
	I1016 19:42:15.508553  474835 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token p5om8o.t1b9rhhxpencyzhg \
	I1016 19:42:15.508661  474835 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 19:42:15.508683  474835 kubeadm.go:318] 	--control-plane 
	I1016 19:42:15.508687  474835 kubeadm.go:318] 
	I1016 19:42:15.508776  474835 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 19:42:15.508780  474835 kubeadm.go:318] 
	I1016 19:42:15.508974  474835 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token p5om8o.t1b9rhhxpencyzhg \
	I1016 19:42:15.509088  474835 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
	I1016 19:42:15.511413  474835 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 19:42:15.511660  474835 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 19:42:15.511773  474835 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 19:42:15.511797  474835 cni.go:84] Creating CNI manager for ""
	I1016 19:42:15.511810  474835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:42:15.514968  474835 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 19:42:15.517994  474835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 19:42:15.524432  474835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 19:42:15.524452  474835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 19:42:15.546362  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 19:42:15.860563  474835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 19:42:15.860715  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:15.860767  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-225696 minikube.k8s.io/updated_at=2025_10_16T19_42_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=no-preload-225696 minikube.k8s.io/primary=true
	I1016 19:42:16.039221  474835 ops.go:34] apiserver oom_adj: -16
	I1016 19:42:16.039395  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:16.539482  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:17.039800  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:17.540038  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:18.040443  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1016 19:42:16.342761  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:18.343256  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	I1016 19:42:18.540320  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:19.039813  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:19.539518  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:20.039683  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:20.539828  474835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:42:20.640562  474835 kubeadm.go:1113] duration metric: took 4.77991017s to wait for elevateKubeSystemPrivileges
	I1016 19:42:20.640590  474835 kubeadm.go:402] duration metric: took 24.913289089s to StartCluster
	I1016 19:42:20.640609  474835 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:42:20.640672  474835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:42:20.642329  474835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:42:20.642566  474835 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:42:20.642892  474835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 19:42:20.643398  474835 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:42:20.643488  474835 addons.go:69] Setting storage-provisioner=true in profile "no-preload-225696"
	I1016 19:42:20.643504  474835 addons.go:238] Setting addon storage-provisioner=true in "no-preload-225696"
	I1016 19:42:20.643528  474835 host.go:66] Checking if "no-preload-225696" exists ...
	I1016 19:42:20.643540  474835 config.go:182] Loaded profile config "no-preload-225696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:42:20.643576  474835 addons.go:69] Setting default-storageclass=true in profile "no-preload-225696"
	I1016 19:42:20.643588  474835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-225696"
	I1016 19:42:20.643871  474835 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:42:20.644025  474835 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:42:20.651125  474835 out.go:179] * Verifying Kubernetes components...
	I1016 19:42:20.657481  474835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:42:20.686836  474835 addons.go:238] Setting addon default-storageclass=true in "no-preload-225696"
	I1016 19:42:20.686878  474835 host.go:66] Checking if "no-preload-225696" exists ...
	I1016 19:42:20.687288  474835 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:42:20.687596  474835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:42:20.691094  474835 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:42:20.691118  474835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:42:20.691197  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:42:20.715086  474835 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:42:20.715112  474835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:42:20.715177  474835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:42:20.737709  474835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:42:20.763406  474835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:42:21.073040  474835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:42:21.115126  474835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:42:21.115327  474835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 19:42:21.136022  474835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:42:21.811946  474835 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1016 19:42:21.814604  474835 node_ready.go:35] waiting up to 6m0s for node "no-preload-225696" to be "Ready" ...
	I1016 19:42:22.186563  474835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050501921s)
	I1016 19:42:22.189793  474835 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1016 19:42:22.192642  474835 addons.go:514] duration metric: took 1.549228413s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1016 19:42:22.317276  474835 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-225696" context rescaled to 1 replicas
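The "rescaled to 1 replicas" entries above are minikube trimming the default two-replica CoreDNS deployment down to a single replica for a single-node cluster. The effect is roughly equivalent to the following (a sketch, assuming the kubeconfig context name matches the profile name):

    kubectl --context no-preload-225696 -n kube-system scale deployment coredns --replicas=1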
	W1016 19:42:20.344106  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:22.843239  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:24.844419  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:23.818354  474835 node_ready.go:57] node "no-preload-225696" has "Ready":"False" status (will retry)
	W1016 19:42:25.818435  474835 node_ready.go:57] node "no-preload-225696" has "Ready":"False" status (will retry)
	W1016 19:42:27.342978  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:29.843614  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:28.319219  474835 node_ready.go:57] node "no-preload-225696" has "Ready":"False" status (will retry)
	W1016 19:42:30.818123  474835 node_ready.go:57] node "no-preload-225696" has "Ready":"False" status (will retry)
	W1016 19:42:32.343302  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:34.343675  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:33.318182  474835 node_ready.go:57] node "no-preload-225696" has "Ready":"False" status (will retry)
	I1016 19:42:34.821178  474835 node_ready.go:49] node "no-preload-225696" is "Ready"
	I1016 19:42:34.821210  474835 node_ready.go:38] duration metric: took 13.006528088s for node "no-preload-225696" to be "Ready" ...
	I1016 19:42:34.821225  474835 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:42:34.821285  474835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:42:34.839368  474835 api_server.go:72] duration metric: took 14.196770986s to wait for apiserver process to appear ...
	I1016 19:42:34.839393  474835 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:42:34.839426  474835 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 19:42:34.851098  474835 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1016 19:42:34.852234  474835 api_server.go:141] control plane version: v1.34.1
	I1016 19:42:34.852256  474835 api_server.go:131] duration metric: took 12.855618ms to wait for apiserver health ...
	I1016 19:42:34.852265  474835 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:42:34.861261  474835 system_pods.go:59] 8 kube-system pods found
	I1016 19:42:34.861296  474835 system_pods.go:61] "coredns-66bc5c9577-jr55z" [c09261b9-4ebb-417f-8fae-3957ed09e35d] Pending
	I1016 19:42:34.861303  474835 system_pods.go:61] "etcd-no-preload-225696" [9063344e-e43a-45b1-9ef8-131253095920] Running
	I1016 19:42:34.861310  474835 system_pods.go:61] "kindnet-kfg52" [6016f902-4dbc-47f8-b054-38453bfd865d] Running
	I1016 19:42:34.861314  474835 system_pods.go:61] "kube-apiserver-no-preload-225696" [66019d1a-b73d-4e89-9ef9-101a35d64010] Running
	I1016 19:42:34.861320  474835 system_pods.go:61] "kube-controller-manager-no-preload-225696" [887bd998-d994-41cd-952b-0fb4d7e352da] Running
	I1016 19:42:34.861324  474835 system_pods.go:61] "kube-proxy-m86rv" [81b760ca-5d1a-415d-9ea3-e0595c050e9c] Running
	I1016 19:42:34.861329  474835 system_pods.go:61] "kube-scheduler-no-preload-225696" [7ef2be5d-3673-45c7-b0e8-5fc03eed21c0] Running
	I1016 19:42:34.861337  474835 system_pods.go:61] "storage-provisioner" [bcf3f7a7-6bc7-461a-a496-7d4467675e67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:42:34.861347  474835 system_pods.go:74] duration metric: took 9.076495ms to wait for pod list to return data ...
	I1016 19:42:34.861358  474835 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:42:34.874030  474835 default_sa.go:45] found service account: "default"
	I1016 19:42:34.874055  474835 default_sa.go:55] duration metric: took 12.681274ms for default service account to be created ...
	I1016 19:42:34.874065  474835 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:42:34.879802  474835 system_pods.go:86] 8 kube-system pods found
	I1016 19:42:34.879843  474835 system_pods.go:89] "coredns-66bc5c9577-jr55z" [c09261b9-4ebb-417f-8fae-3957ed09e35d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:42:34.879850  474835 system_pods.go:89] "etcd-no-preload-225696" [9063344e-e43a-45b1-9ef8-131253095920] Running
	I1016 19:42:34.879858  474835 system_pods.go:89] "kindnet-kfg52" [6016f902-4dbc-47f8-b054-38453bfd865d] Running
	I1016 19:42:34.879863  474835 system_pods.go:89] "kube-apiserver-no-preload-225696" [66019d1a-b73d-4e89-9ef9-101a35d64010] Running
	I1016 19:42:34.879868  474835 system_pods.go:89] "kube-controller-manager-no-preload-225696" [887bd998-d994-41cd-952b-0fb4d7e352da] Running
	I1016 19:42:34.879872  474835 system_pods.go:89] "kube-proxy-m86rv" [81b760ca-5d1a-415d-9ea3-e0595c050e9c] Running
	I1016 19:42:34.879876  474835 system_pods.go:89] "kube-scheduler-no-preload-225696" [7ef2be5d-3673-45c7-b0e8-5fc03eed21c0] Running
	I1016 19:42:34.879885  474835 system_pods.go:89] "storage-provisioner" [bcf3f7a7-6bc7-461a-a496-7d4467675e67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:42:34.879906  474835 retry.go:31] will retry after 282.728835ms: missing components: kube-dns
	I1016 19:42:35.167458  474835 system_pods.go:86] 8 kube-system pods found
	I1016 19:42:35.167493  474835 system_pods.go:89] "coredns-66bc5c9577-jr55z" [c09261b9-4ebb-417f-8fae-3957ed09e35d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:42:35.167500  474835 system_pods.go:89] "etcd-no-preload-225696" [9063344e-e43a-45b1-9ef8-131253095920] Running
	I1016 19:42:35.167507  474835 system_pods.go:89] "kindnet-kfg52" [6016f902-4dbc-47f8-b054-38453bfd865d] Running
	I1016 19:42:35.167511  474835 system_pods.go:89] "kube-apiserver-no-preload-225696" [66019d1a-b73d-4e89-9ef9-101a35d64010] Running
	I1016 19:42:35.167516  474835 system_pods.go:89] "kube-controller-manager-no-preload-225696" [887bd998-d994-41cd-952b-0fb4d7e352da] Running
	I1016 19:42:35.167520  474835 system_pods.go:89] "kube-proxy-m86rv" [81b760ca-5d1a-415d-9ea3-e0595c050e9c] Running
	I1016 19:42:35.167523  474835 system_pods.go:89] "kube-scheduler-no-preload-225696" [7ef2be5d-3673-45c7-b0e8-5fc03eed21c0] Running
	I1016 19:42:35.167532  474835 system_pods.go:89] "storage-provisioner" [bcf3f7a7-6bc7-461a-a496-7d4467675e67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:42:35.167557  474835 retry.go:31] will retry after 310.945409ms: missing components: kube-dns
	I1016 19:42:35.482801  474835 system_pods.go:86] 8 kube-system pods found
	I1016 19:42:35.482839  474835 system_pods.go:89] "coredns-66bc5c9577-jr55z" [c09261b9-4ebb-417f-8fae-3957ed09e35d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:42:35.482848  474835 system_pods.go:89] "etcd-no-preload-225696" [9063344e-e43a-45b1-9ef8-131253095920] Running
	I1016 19:42:35.482854  474835 system_pods.go:89] "kindnet-kfg52" [6016f902-4dbc-47f8-b054-38453bfd865d] Running
	I1016 19:42:35.482859  474835 system_pods.go:89] "kube-apiserver-no-preload-225696" [66019d1a-b73d-4e89-9ef9-101a35d64010] Running
	I1016 19:42:35.482864  474835 system_pods.go:89] "kube-controller-manager-no-preload-225696" [887bd998-d994-41cd-952b-0fb4d7e352da] Running
	I1016 19:42:35.482869  474835 system_pods.go:89] "kube-proxy-m86rv" [81b760ca-5d1a-415d-9ea3-e0595c050e9c] Running
	I1016 19:42:35.482873  474835 system_pods.go:89] "kube-scheduler-no-preload-225696" [7ef2be5d-3673-45c7-b0e8-5fc03eed21c0] Running
	I1016 19:42:35.482877  474835 system_pods.go:89] "storage-provisioner" [bcf3f7a7-6bc7-461a-a496-7d4467675e67] Running
	I1016 19:42:35.482884  474835 system_pods.go:126] duration metric: took 608.814004ms to wait for k8s-apps to be running ...
	I1016 19:42:35.482891  474835 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:42:35.482950  474835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:42:35.504864  474835 system_svc.go:56] duration metric: took 21.950066ms WaitForService to wait for kubelet
	I1016 19:42:35.504894  474835 kubeadm.go:586] duration metric: took 14.862302331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:42:35.504915  474835 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:42:35.509263  474835 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:42:35.509345  474835 node_conditions.go:123] node cpu capacity is 2
	I1016 19:42:35.509373  474835 node_conditions.go:105] duration metric: took 4.452171ms to run NodePressure ...
	I1016 19:42:35.509400  474835 start.go:241] waiting for startup goroutines ...
	I1016 19:42:35.509431  474835 start.go:246] waiting for cluster config update ...
	I1016 19:42:35.509462  474835 start.go:255] writing updated cluster config ...
	I1016 19:42:35.509811  474835 ssh_runner.go:195] Run: rm -f paused
	I1016 19:42:35.515578  474835 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:42:35.520239  474835 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jr55z" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.526601  474835 pod_ready.go:94] pod "coredns-66bc5c9577-jr55z" is "Ready"
	I1016 19:42:36.526631  474835 pod_ready.go:86] duration metric: took 1.00636545s for pod "coredns-66bc5c9577-jr55z" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.536164  474835 pod_ready.go:83] waiting for pod "etcd-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.541276  474835 pod_ready.go:94] pod "etcd-no-preload-225696" is "Ready"
	I1016 19:42:36.541354  474835 pod_ready.go:86] duration metric: took 5.164621ms for pod "etcd-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.544127  474835 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.548952  474835 pod_ready.go:94] pod "kube-apiserver-no-preload-225696" is "Ready"
	I1016 19:42:36.549031  474835 pod_ready.go:86] duration metric: took 4.879228ms for pod "kube-apiserver-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.551554  474835 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.724394  474835 pod_ready.go:94] pod "kube-controller-manager-no-preload-225696" is "Ready"
	I1016 19:42:36.724426  474835 pod_ready.go:86] duration metric: took 172.843064ms for pod "kube-controller-manager-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:36.924863  474835 pod_ready.go:83] waiting for pod "kube-proxy-m86rv" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:37.323818  474835 pod_ready.go:94] pod "kube-proxy-m86rv" is "Ready"
	I1016 19:42:37.323845  474835 pod_ready.go:86] duration metric: took 398.951829ms for pod "kube-proxy-m86rv" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:37.524545  474835 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:37.923719  474835 pod_ready.go:94] pod "kube-scheduler-no-preload-225696" is "Ready"
	I1016 19:42:37.923746  474835 pod_ready.go:86] duration metric: took 399.131673ms for pod "kube-scheduler-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:42:37.923758  474835 pod_ready.go:40] duration metric: took 2.408103802s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:42:37.976962  474835 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:42:37.980442  474835 out.go:179] * Done! kubectl is now configured to use "no-preload-225696" cluster and "default" namespace by default
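The closing "minor skew: 1" note means the local kubectl (1.33.2) is one minor version behind the 1.34.1 API server, which is within kubectl's supported version skew. A quick way to re-check the pair against the context this run just wrote (a sketch):

    kubectl --context no-preload-225696 version --output=yaml   # compare clientVersion with serverVersion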
	W1016 19:42:36.842606  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:38.842840  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:40.843022  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
	W1016 19:42:43.342664  475347 node_ready.go:57] node "embed-certs-751669" has "Ready":"False" status (will retry)
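For reference, the [control-plane-check] and node_ready polling in the log above can be reproduced by hand when a start stalls. A minimal sketch using the endpoints printed in this run (the curl probes must run inside the node, e.g. via minikube ssh, and -k is needed because the serving certificates are cluster-internal; the kubectl context name is assumed to match the profile):

    curl -k https://192.168.76.2:8443/livez       # kube-apiserver on no-preload-225696
    curl -k https://127.0.0.1:10257/healthz       # kube-controller-manager (localhost only)
    curl -k https://127.0.0.1:10259/livez         # kube-scheduler (localhost only)
    kubectl --context embed-certs-751669 get node embed-certs-751669 --watch   # watch for the Ready flip the retries wait for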
	
	
	==> CRI-O <==
	Oct 16 19:42:35 no-preload-225696 crio[840]: time="2025-10-16T19:42:35.202329674Z" level=info msg="Created container f810245829676f3987273ecafa6ad173f79ce5661edb6f7a4c52b3fe41a6d3bc: kube-system/coredns-66bc5c9577-jr55z/coredns" id=8bc943b5-9d1b-43e6-b790-2df0e20b3132 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:42:35 no-preload-225696 crio[840]: time="2025-10-16T19:42:35.20350548Z" level=info msg="Starting container: f810245829676f3987273ecafa6ad173f79ce5661edb6f7a4c52b3fe41a6d3bc" id=13033c69-6156-4004-bae4-a5dd90fba4b2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:42:35 no-preload-225696 crio[840]: time="2025-10-16T19:42:35.206060541Z" level=info msg="Started container" PID=2502 containerID=f810245829676f3987273ecafa6ad173f79ce5661edb6f7a4c52b3fe41a6d3bc description=kube-system/coredns-66bc5c9577-jr55z/coredns id=13033c69-6156-4004-bae4-a5dd90fba4b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ef55d29e8ace42f648772992c0d84240a8ae4b9e2daf438c06ea6fb5538135f
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.511308129Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bb56b7b0-aae4-427c-a4ce-86afc64d38ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.511388195Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.521937575Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:904a3dd8f5307d13264347d61577a0a5c6ff11767e175a5877dc08f0fdaa750e UID:5e3658a2-5c39-4bba-8665-1c1d32931f47 NetNS:/var/run/netns/073b9e33-0dc7-4b6a-981b-3009b5276b19 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012fc010}] Aliases:map[]}"
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.521977953Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.53755482Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:904a3dd8f5307d13264347d61577a0a5c6ff11767e175a5877dc08f0fdaa750e UID:5e3658a2-5c39-4bba-8665-1c1d32931f47 NetNS:/var/run/netns/073b9e33-0dc7-4b6a-981b-3009b5276b19 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012fc010}] Aliases:map[]}"
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.538002964Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.541611328Z" level=info msg="Ran pod sandbox 904a3dd8f5307d13264347d61577a0a5c6ff11767e175a5877dc08f0fdaa750e with infra container: default/busybox/POD" id=bb56b7b0-aae4-427c-a4ce-86afc64d38ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.543244936Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=919ef924-e51b-4c5c-9670-bb709755da29 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.543554773Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=919ef924-e51b-4c5c-9670-bb709755da29 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.543682364Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=919ef924-e51b-4c5c-9670-bb709755da29 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.546657959Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=14ba4f4b-60cf-4352-9150-761a1fec4f1c name=/runtime.v1.ImageService/PullImage
	Oct 16 19:42:38 no-preload-225696 crio[840]: time="2025-10-16T19:42:38.550102965Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.584364763Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=14ba4f4b-60cf-4352-9150-761a1fec4f1c name=/runtime.v1.ImageService/PullImage
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.585354655Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=58d60a2e-738f-49e5-94c3-b2a0080536b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.587345009Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72445c6d-ee1e-439a-8561-9315db72476b name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.594860899Z" level=info msg="Creating container: default/busybox/busybox" id=5108bbf3-22a5-4db8-9993-a78d56b9f8a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.595969497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.605322082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.606229102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.625381677Z" level=info msg="Created container 52bb87f05600e894b050824c50893330f9fe39a39b811de538320ccc505de6d8: default/busybox/busybox" id=5108bbf3-22a5-4db8-9993-a78d56b9f8a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.627999336Z" level=info msg="Starting container: 52bb87f05600e894b050824c50893330f9fe39a39b811de538320ccc505de6d8" id=471bf168-4136-43ca-a90b-d656820641d9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:42:40 no-preload-225696 crio[840]: time="2025-10-16T19:42:40.630486236Z" level=info msg="Started container" PID=2553 containerID=52bb87f05600e894b050824c50893330f9fe39a39b811de538320ccc505de6d8 description=default/busybox/busybox id=471bf168-4136-43ca-a90b-d656820641d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=904a3dd8f5307d13264347d61577a0a5c6ff11767e175a5877dc08f0fdaa750e
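The RunPodSandbox/CreateContainer/StartContainer entries above are CRI-O answering the kubelet's CRI calls for the default/busybox pod. They can be cross-checked on the node with crictl, using the image name and the short container ID that also appear in the container status table below (a sketch; run inside the node):

    sudo crictl images | grep busybox     # the gcr.io/k8s-minikube/busybox image pulled at 19:42:40
    sudo crictl ps --name busybox         # the running container created from it
    sudo crictl logs 52bb87f05600e        # ID prefix from the "Started container" entry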
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	52bb87f05600e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   904a3dd8f5307       busybox                                     default
	f810245829676       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   1ef55d29e8ace       coredns-66bc5c9577-jr55z                    kube-system
	3c45938d20c71       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   2d6c862ec1a44       storage-provisioner                         kube-system
	54e8c77f76473       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   18d1f6428aa2d       kindnet-kfg52                               kube-system
	adcad4db221d6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   20c0ac7fce1ca       kube-proxy-m86rv                            kube-system
	961017b09a002       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      41 seconds ago      Running             kube-controller-manager   0                   7fb9ebf728124       kube-controller-manager-no-preload-225696   kube-system
	3b29d9038bc16       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      41 seconds ago      Running             kube-scheduler            0                   573936ceae976       kube-scheduler-no-preload-225696            kube-system
	5d844531f0b16       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      41 seconds ago      Running             etcd                      0                   6b2ea6e094038       etcd-no-preload-225696                      kube-system
	97a04f8d2c2ed       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      41 seconds ago      Running             kube-apiserver            0                   d715ccb787ac5       kube-apiserver-no-preload-225696            kube-system
	
	
	==> coredns [f810245829676f3987273ecafa6ad173f79ce5661edb6f7a4c52b3fe41a6d3bc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35534 - 53471 "HINFO IN 8550474476021800027.9016928124799343799. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022626553s
	
	
	==> describe nodes <==
	Name:               no-preload-225696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-225696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=no-preload-225696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_42_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:42:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-225696
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:42:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:42:45 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:42:45 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:42:45 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:42:45 +0000   Thu, 16 Oct 2025 19:42:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-225696
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                7c11c781-d716-4555-8158-86dd5d9b993e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-jr55z                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-225696                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-kfg52                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-225696             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-225696    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-m86rv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-225696             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 43s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  43s (x4 over 43s)  kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x4 over 43s)  kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x4 over 43s)  kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-225696 event: Registered Node no-preload-225696 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-225696 status is now: NodeReady
	
	
	==> dmesg <==
	[ +33.922450] overlayfs: idmapped layers are currently not supported
	[Oct16 19:18] overlayfs: idmapped layers are currently not supported
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5d844531f0b16fb13adae896cdd7ea81f44002263c889441f3c98789f52b7e01] <==
	{"level":"warn","ts":"2025-10-16T19:42:09.551653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.585866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.634279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.677487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.727681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.771794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.793728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.806492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.824540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.846901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.864482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.886795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.905937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.927695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.950652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:09.983640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.049008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.100983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.120593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.170262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.186439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.234132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.266341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.301236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:10.471033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36112","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:49 up  2:24,  0 user,  load average: 4.42, 3.76, 3.00
	Linux no-preload-225696 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54e8c77f7647398e7e951bbe9ff3a209caf81bbf4146bd5f62134d76963a4cff] <==
	I1016 19:42:24.310249       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:42:24.310485       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:42:24.310605       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:42:24.310624       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:42:24.310634       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:42:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:42:24.608658       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:42:24.608727       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:42:24.608767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:42:24.616570       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 19:42:24.809785       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:42:24.809872       1 metrics.go:72] Registering metrics
	I1016 19:42:24.809951       1 controller.go:711] "Syncing nftables rules"
	I1016 19:42:34.614870       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:42:34.614911       1 main.go:301] handling current node
	I1016 19:42:44.609229       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:42:44.609261       1 main.go:301] handling current node
	
	
	==> kube-apiserver [97a04f8d2c2edafa8e3cad8a8ba4301e4b91e30f95407998ac6b206e5a49c9f1] <==
	I1016 19:42:11.758314       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 19:42:11.775075       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:42:11.775309       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:42:12.284334       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 19:42:12.292711       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 19:42:12.292760       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:42:13.506764       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:42:13.577772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:42:13.700745       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 19:42:13.709064       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1016 19:42:13.710225       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:42:13.716500       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:42:14.433258       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	{"level":"warn","ts":"2025-10-16T19:42:14.900915Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400161a000/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1016 19:42:14.901540       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 37.973µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1016 19:42:14.901413       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/default/events/no-preload-225696.186f102411aeff16" auditID="0d9a25c3-b4b6-4139-9eb0-b44d1d4ebd21"
	E1016 19:42:14.901667       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="236.679µs" method="PATCH" path="/api/v1/namespaces/default/events/no-preload-225696.186f102411aeff16" result=null
	I1016 19:42:14.994415       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:42:15.022504       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 19:42:15.043712       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 19:42:19.734512       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1016 19:42:20.245082       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:42:20.253096       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:42:20.384919       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1016 19:42:47.357079       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:35304: use of closed network connection
	
	
	==> kube-controller-manager [961017b09a002a75c42270e7f1a822e45e2e9cce9d0c99b4f5835513e72aa5bd] <==
	I1016 19:42:19.478051       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:42:19.478395       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 19:42:19.478664       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 19:42:19.478899       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:42:19.478913       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 19:42:19.478924       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:42:19.478933       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 19:42:19.481122       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 19:42:19.481541       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 19:42:19.481560       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 19:42:19.484223       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 19:42:19.484240       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:42:19.486824       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 19:42:19.487006       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:42:19.487099       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:42:19.487201       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:42:19.487234       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:42:19.487274       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:42:19.494345       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:42:19.494612       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:42:19.496592       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 19:42:19.498039       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-225696" podCIDRs=["10.244.0.0/24"]
	I1016 19:42:19.518416       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:42:19.521584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:42:39.437053       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [adcad4db221d6627b9d226955f6d4a6cd2a4bb1f3a0c092b091fe597ada62964] <==
	I1016 19:42:21.148046       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:42:21.228164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:42:21.328497       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:42:21.328538       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:42:21.328623       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:42:21.376836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:42:21.376892       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:42:21.395219       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:42:21.395649       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:42:21.395676       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:42:21.396982       1 config.go:200] "Starting service config controller"
	I1016 19:42:21.397009       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:42:21.398441       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:42:21.398459       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:42:21.398478       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:42:21.398482       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:42:21.398993       1 config.go:309] "Starting node config controller"
	I1016 19:42:21.399005       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:42:21.399010       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:42:21.497288       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:42:21.498508       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 19:42:21.498767       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b29d9038bc16be12f309eec9175ee99bde3ebdf6d0b1eec094bef608c439221] <==
	E1016 19:42:11.626841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 19:42:11.626966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 19:42:11.627067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 19:42:11.627166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 19:42:11.627340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 19:42:11.627410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 19:42:11.627520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 19:42:11.627591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 19:42:11.627853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 19:42:12.476764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 19:42:12.490775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 19:42:12.512619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 19:42:12.542594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 19:42:12.553109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 19:42:12.644005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 19:42:12.695909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 19:42:12.702090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 19:42:12.762956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 19:42:12.771647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 19:42:12.804219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 19:42:12.846836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 19:42:12.897425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 19:42:12.969782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 19:42:12.969935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1016 19:42:15.961931       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: I1016 19:42:19.839016    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/81b760ca-5d1a-415d-9ea3-e0595c050e9c-kube-proxy\") pod \"kube-proxy-m86rv\" (UID: \"81b760ca-5d1a-415d-9ea3-e0595c050e9c\") " pod="kube-system/kube-proxy-m86rv"
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: I1016 19:42:19.839051    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnlf5\" (UniqueName: \"kubernetes.io/projected/81b760ca-5d1a-415d-9ea3-e0595c050e9c-kube-api-access-lnlf5\") pod \"kube-proxy-m86rv\" (UID: \"81b760ca-5d1a-415d-9ea3-e0595c050e9c\") " pod="kube-system/kube-proxy-m86rv"
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: I1016 19:42:19.839073    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81b760ca-5d1a-415d-9ea3-e0595c050e9c-xtables-lock\") pod \"kube-proxy-m86rv\" (UID: \"81b760ca-5d1a-415d-9ea3-e0595c050e9c\") " pod="kube-system/kube-proxy-m86rv"
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: I1016 19:42:19.839090    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6016f902-4dbc-47f8-b054-38453bfd865d-cni-cfg\") pod \"kindnet-kfg52\" (UID: \"6016f902-4dbc-47f8-b054-38453bfd865d\") " pod="kube-system/kindnet-kfg52"
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: I1016 19:42:19.839112    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6016f902-4dbc-47f8-b054-38453bfd865d-lib-modules\") pod \"kindnet-kfg52\" (UID: \"6016f902-4dbc-47f8-b054-38453bfd865d\") " pod="kube-system/kindnet-kfg52"
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: E1016 19:42:19.951277    2003 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: E1016 19:42:19.951477    2003 projected.go:196] Error preparing data for projected volume kube-api-access-cwsbc for pod kube-system/kindnet-kfg52: configmap "kube-root-ca.crt" not found
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: E1016 19:42:19.951651    2003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6016f902-4dbc-47f8-b054-38453bfd865d-kube-api-access-cwsbc podName:6016f902-4dbc-47f8-b054-38453bfd865d nodeName:}" failed. No retries permitted until 2025-10-16 19:42:20.451605279 +0000 UTC m=+5.491228309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cwsbc" (UniqueName: "kubernetes.io/projected/6016f902-4dbc-47f8-b054-38453bfd865d-kube-api-access-cwsbc") pod "kindnet-kfg52" (UID: "6016f902-4dbc-47f8-b054-38453bfd865d") : configmap "kube-root-ca.crt" not found
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: E1016 19:42:19.958825    2003 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: E1016 19:42:19.958998    2003 projected.go:196] Error preparing data for projected volume kube-api-access-lnlf5 for pod kube-system/kube-proxy-m86rv: configmap "kube-root-ca.crt" not found
	Oct 16 19:42:19 no-preload-225696 kubelet[2003]: E1016 19:42:19.959075    2003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81b760ca-5d1a-415d-9ea3-e0595c050e9c-kube-api-access-lnlf5 podName:81b760ca-5d1a-415d-9ea3-e0595c050e9c nodeName:}" failed. No retries permitted until 2025-10-16 19:42:20.45905372 +0000 UTC m=+5.498676751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lnlf5" (UniqueName: "kubernetes.io/projected/81b760ca-5d1a-415d-9ea3-e0595c050e9c-kube-api-access-lnlf5") pod "kube-proxy-m86rv" (UID: "81b760ca-5d1a-415d-9ea3-e0595c050e9c") : configmap "kube-root-ca.crt" not found
	Oct 16 19:42:20 no-preload-225696 kubelet[2003]: I1016 19:42:20.544894    2003 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 19:42:20 no-preload-225696 kubelet[2003]: W1016 19:42:20.744731    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/crio-18d1f6428aa2d1acdd89a1a85ee4bf2b1e0a10ebc9327d2887c211c2e2d40de8 WatchSource:0}: Error finding container 18d1f6428aa2d1acdd89a1a85ee4bf2b1e0a10ebc9327d2887c211c2e2d40de8: Status 404 returned error can't find the container with id 18d1f6428aa2d1acdd89a1a85ee4bf2b1e0a10ebc9327d2887c211c2e2d40de8
	Oct 16 19:42:20 no-preload-225696 kubelet[2003]: W1016 19:42:20.749845    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/crio-20c0ac7fce1caef4a064e295b2f354220234898686d7210f4c53242aeb3b39d4 WatchSource:0}: Error finding container 20c0ac7fce1caef4a064e295b2f354220234898686d7210f4c53242aeb3b39d4: Status 404 returned error can't find the container with id 20c0ac7fce1caef4a064e295b2f354220234898686d7210f4c53242aeb3b39d4
	Oct 16 19:42:21 no-preload-225696 kubelet[2003]: I1016 19:42:21.240941    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m86rv" podStartSLOduration=2.240922143 podStartE2EDuration="2.240922143s" podCreationTimestamp="2025-10-16 19:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:21.240771856 +0000 UTC m=+6.280394895" watchObservedRunningTime="2025-10-16 19:42:21.240922143 +0000 UTC m=+6.280545173"
	Oct 16 19:42:25 no-preload-225696 kubelet[2003]: I1016 19:42:25.408458    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kfg52" podStartSLOduration=3.022822819 podStartE2EDuration="6.408437577s" podCreationTimestamp="2025-10-16 19:42:19 +0000 UTC" firstStartedPulling="2025-10-16 19:42:20.786196378 +0000 UTC m=+5.825819409" lastFinishedPulling="2025-10-16 19:42:24.171811136 +0000 UTC m=+9.211434167" observedRunningTime="2025-10-16 19:42:24.251375503 +0000 UTC m=+9.290998534" watchObservedRunningTime="2025-10-16 19:42:25.408437577 +0000 UTC m=+10.448060616"
	Oct 16 19:42:34 no-preload-225696 kubelet[2003]: I1016 19:42:34.753321    2003 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 19:42:34 no-preload-225696 kubelet[2003]: I1016 19:42:34.857507    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c09261b9-4ebb-417f-8fae-3957ed09e35d-config-volume\") pod \"coredns-66bc5c9577-jr55z\" (UID: \"c09261b9-4ebb-417f-8fae-3957ed09e35d\") " pod="kube-system/coredns-66bc5c9577-jr55z"
	Oct 16 19:42:34 no-preload-225696 kubelet[2003]: I1016 19:42:34.857583    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx9zj\" (UniqueName: \"kubernetes.io/projected/c09261b9-4ebb-417f-8fae-3957ed09e35d-kube-api-access-qx9zj\") pod \"coredns-66bc5c9577-jr55z\" (UID: \"c09261b9-4ebb-417f-8fae-3957ed09e35d\") " pod="kube-system/coredns-66bc5c9577-jr55z"
	Oct 16 19:42:34 no-preload-225696 kubelet[2003]: I1016 19:42:34.857668    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcf3f7a7-6bc7-461a-a496-7d4467675e67-tmp\") pod \"storage-provisioner\" (UID: \"bcf3f7a7-6bc7-461a-a496-7d4467675e67\") " pod="kube-system/storage-provisioner"
	Oct 16 19:42:34 no-preload-225696 kubelet[2003]: I1016 19:42:34.857715    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw9j5\" (UniqueName: \"kubernetes.io/projected/bcf3f7a7-6bc7-461a-a496-7d4467675e67-kube-api-access-lw9j5\") pod \"storage-provisioner\" (UID: \"bcf3f7a7-6bc7-461a-a496-7d4467675e67\") " pod="kube-system/storage-provisioner"
	Oct 16 19:42:35 no-preload-225696 kubelet[2003]: W1016 19:42:35.130889    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/crio-1ef55d29e8ace42f648772992c0d84240a8ae4b9e2daf438c06ea6fb5538135f WatchSource:0}: Error finding container 1ef55d29e8ace42f648772992c0d84240a8ae4b9e2daf438c06ea6fb5538135f: Status 404 returned error can't find the container with id 1ef55d29e8ace42f648772992c0d84240a8ae4b9e2daf438c06ea6fb5538135f
	Oct 16 19:42:35 no-preload-225696 kubelet[2003]: I1016 19:42:35.320694    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jr55z" podStartSLOduration=15.320619396 podStartE2EDuration="15.320619396s" podCreationTimestamp="2025-10-16 19:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:35.294723863 +0000 UTC m=+20.334346902" watchObservedRunningTime="2025-10-16 19:42:35.320619396 +0000 UTC m=+20.360242435"
	Oct 16 19:42:35 no-preload-225696 kubelet[2003]: I1016 19:42:35.321173    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.321127175 podStartE2EDuration="13.321127175s" podCreationTimestamp="2025-10-16 19:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:35.321046912 +0000 UTC m=+20.360669959" watchObservedRunningTime="2025-10-16 19:42:35.321127175 +0000 UTC m=+20.360750206"
	Oct 16 19:42:38 no-preload-225696 kubelet[2003]: I1016 19:42:38.278088    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhkq\" (UniqueName: \"kubernetes.io/projected/5e3658a2-5c39-4bba-8665-1c1d32931f47-kube-api-access-dvhkq\") pod \"busybox\" (UID: \"5e3658a2-5c39-4bba-8665-1c1d32931f47\") " pod="default/busybox"
	
	
	==> storage-provisioner [3c45938d20c71ab399fe04e9e99b567b7d0d1a3d92d23b63dd2ff89fb63b3bcf] <==
	I1016 19:42:35.177480       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:42:35.194868       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:42:35.194951       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:42:35.198513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:35.209932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:42:35.210175       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:42:35.211172       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-225696_99bc44a6-307e-4818-aea4-6f74b57561ea!
	I1016 19:42:35.212155       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62f60c8b-5f75-4039-9f5b-c9731950c343", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-225696_99bc44a6-307e-4818-aea4-6f74b57561ea became leader
	W1016 19:42:35.226478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:35.237939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:42:35.312164       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-225696_99bc44a6-307e-4818-aea4-6f74b57561ea!
	W1016 19:42:37.240847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:37.245591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:39.248986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:39.253551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:41.256898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:41.261605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:43.264519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:43.269100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:45.274362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:45.290335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:47.300642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:47.308226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:49.311907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:49.316791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-225696 -n no-preload-225696
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-225696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (315.284367ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:43:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-751669 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-751669 describe deploy/metrics-server -n kube-system: exit status 1 (99.990037ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-751669 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-751669
helpers_test.go:243: (dbg) docker inspect embed-certs-751669:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48",
	        "Created": "2025-10-16T19:41:31.536310146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476608,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:41:31.610828955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/hosts",
	        "LogPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48-json.log",
	        "Name": "/embed-certs-751669",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-751669:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-751669",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48",
	                "LowerDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-751669",
	                "Source": "/var/lib/docker/volumes/embed-certs-751669/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-751669",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-751669",
	                "name.minikube.sigs.k8s.io": "embed-certs-751669",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "609c407e76259704a67e00f66ce9437df55ed5c0fbd7ffe4b7369aaee2e6d8c1",
	            "SandboxKey": "/var/run/docker/netns/609c407e7625",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-751669": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:50:59:96:12:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47eda41405f419208be3b296b694c6a50ba0a9ebb091dac0d31792e4b62c69d1",
	                    "EndpointID": "f465f1d39c7e290b59b50dab71b10a876f273aadc8b0204139ab7ea37153be2b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-751669",
	                        "6ce556d58dc2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
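(note: the dump above is the full inspect JSON; for triage the few fields the post-mortem cares about can be pulled directly with docker's Go-template formatting. A sketch based on the values shown above; the network key equals the profile name, so it must be looked up with index because of the dashes:

	docker inspect embed-certs-751669 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-751669").IPAddress}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# with the output above this prints: running 192.168.85.2 33431)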
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-751669 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-751669 logs -n 25: (1.525284082s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ delete  │ -p cilium-078761                                                                                                                                                                                                                              │ cilium-078761            │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p force-systemd-env-871877                                                                                                                                                                                                                   │ force-systemd-env-871877 │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:37 UTC │
	│ start   │ -p cert-options-853056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:37 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ cert-options-853056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ ssh     │ -p cert-options-853056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p cert-options-853056                                                                                                                                                                                                                        │ cert-options-853056      │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:39 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │                     │
	│ stop    │ -p old-k8s-version-663330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │ 16 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p cert-expiration-828182                                                                                                                                                                                                                     │ cert-expiration-828182   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330   │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696        │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696        │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696        │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696        │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696        │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669       │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:43:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:43:02.325357  481534 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:43:02.325492  481534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:43:02.325505  481534 out.go:374] Setting ErrFile to fd 2...
	I1016 19:43:02.325510  481534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:43:02.325790  481534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:43:02.326267  481534 out.go:368] Setting JSON to false
	I1016 19:43:02.327243  481534 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8712,"bootTime":1760635071,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:43:02.327317  481534 start.go:141] virtualization:  
	I1016 19:43:02.330711  481534 out.go:179] * [no-preload-225696] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:43:02.334802  481534 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:43:02.334867  481534 notify.go:220] Checking for updates...
	I1016 19:43:02.341809  481534 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:43:02.344858  481534 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:43:02.347898  481534 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:43:02.350812  481534 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:43:02.353933  481534 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:43:02.357220  481534 config.go:182] Loaded profile config "no-preload-225696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:43:02.357861  481534 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:43:02.385337  481534 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:43:02.385518  481534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:43:02.458463  481534 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:43:02.448907955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:43:02.458577  481534 docker.go:318] overlay module found
	I1016 19:43:02.461773  481534 out.go:179] * Using the docker driver based on existing profile
	I1016 19:43:02.464525  481534 start.go:305] selected driver: docker
	I1016 19:43:02.464544  481534 start.go:925] validating driver "docker" against &{Name:no-preload-225696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-225696 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:02.464639  481534 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:43:02.465396  481534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:43:02.521383  481534 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:43:02.511019754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:43:02.521754  481534 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:43:02.521792  481534 cni.go:84] Creating CNI manager for ""
	I1016 19:43:02.521859  481534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:43:02.521908  481534 start.go:349] cluster config:
	{Name:no-preload-225696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-225696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:02.525046  481534 out.go:179] * Starting "no-preload-225696" primary control-plane node in "no-preload-225696" cluster
	I1016 19:43:02.527913  481534 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:43:02.530770  481534 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:43:02.533660  481534 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:43:02.533754  481534 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:43:02.533808  481534 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/config.json ...
	I1016 19:43:02.534152  481534 cache.go:107] acquiring lock: {Name:mk3ea886119ae7a72b6b52084b45051802ab0ea9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534244  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1016 19:43:02.534262  481534 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.65µs
	I1016 19:43:02.534270  481534 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1016 19:43:02.534283  481534 cache.go:107] acquiring lock: {Name:mk8c221bddb61f5aa0199dd2282a4ad08ccc25bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534319  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1016 19:43:02.534329  481534 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 47.426µs
	I1016 19:43:02.534336  481534 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1016 19:43:02.534351  481534 cache.go:107] acquiring lock: {Name:mk367e20aa0b8bf29a39851b785be9b06c288668 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534384  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1016 19:43:02.534394  481534 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 44.965µs
	I1016 19:43:02.534401  481534 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1016 19:43:02.534413  481534 cache.go:107] acquiring lock: {Name:mka12d578d5eda57d6881961b212fdd6e69554ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534443  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1016 19:43:02.534453  481534 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.977µs
	I1016 19:43:02.534459  481534 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1016 19:43:02.534468  481534 cache.go:107] acquiring lock: {Name:mk34bd0b12cb785557091613e4fe2fd6d0f1e410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534494  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1016 19:43:02.534506  481534 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 37.998µs
	I1016 19:43:02.534513  481534 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1016 19:43:02.534524  481534 cache.go:107] acquiring lock: {Name:mkbec9c534b6fe06b493e40c9db30c9b5a3d919a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534557  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1016 19:43:02.534567  481534 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.661µs
	I1016 19:43:02.534574  481534 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1016 19:43:02.534582  481534 cache.go:107] acquiring lock: {Name:mk24b680dbbe977d6123ccd2a964f7039271ab29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534613  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1016 19:43:02.534629  481534 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 41.05µs
	I1016 19:43:02.534635  481534 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1016 19:43:02.534644  481534 cache.go:107] acquiring lock: {Name:mk2769dab36282779139fbddfb91398c718d1c7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.534675  481534 cache.go:115] /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1016 19:43:02.534684  481534 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 40.723µs
	I1016 19:43:02.534690  481534 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1016 19:43:02.534696  481534 cache.go:87] Successfully saved all images to host disk.
	I1016 19:43:02.554773  481534 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:43:02.554794  481534 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:43:02.554815  481534 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:43:02.554841  481534 start.go:360] acquireMachinesLock for no-preload-225696: {Name:mke238b45341a0dea874e8b019380818501657de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:02.554904  481534 start.go:364] duration metric: took 48.542µs to acquireMachinesLock for "no-preload-225696"
	I1016 19:43:02.554925  481534 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:43:02.554931  481534 fix.go:54] fixHost starting: 
	I1016 19:43:02.555208  481534 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:43:02.572380  481534 fix.go:112] recreateIfNeeded on no-preload-225696: state=Stopped err=<nil>
	W1016 19:43:02.572415  481534 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 16 19:42:53 embed-certs-751669 crio[839]: time="2025-10-16T19:42:53.963799877Z" level=info msg="Created container a098549fabfb95c216b1d9c90b583b7a5f766eda8d07573f6c0cd354bb6e2fcc: kube-system/coredns-66bc5c9577-2h6z6/coredns" id=a83004c9-0acd-48e4-8d0f-9957842e4d67 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:42:53 embed-certs-751669 crio[839]: time="2025-10-16T19:42:53.964655344Z" level=info msg="Starting container: a098549fabfb95c216b1d9c90b583b7a5f766eda8d07573f6c0cd354bb6e2fcc" id=b85902c0-31fb-4d53-9edd-f81ed969d2ff name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:42:53 embed-certs-751669 crio[839]: time="2025-10-16T19:42:53.966894218Z" level=info msg="Started container" PID=1748 containerID=a098549fabfb95c216b1d9c90b583b7a5f766eda8d07573f6c0cd354bb6e2fcc description=kube-system/coredns-66bc5c9577-2h6z6/coredns id=b85902c0-31fb-4d53-9edd-f81ed969d2ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=da5d278dd55a5a2279eb16a220ae38a2196b535fb8798a9c2aaaefdbee04f5ab
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.381551433Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1b2653b3-3f0f-4ccf-b40b-35476565a0c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.381623467Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.386691407Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:225b8ae0ac9d26c37f678d5b0735a926cc61d809a0aed4924405d0671c18c667 UID:9c91d438-a5f2-4b5c-9b0b-7c64de9a9e22 NetNS:/var/run/netns/8654210e-1f76-4cd6-8778-1db64072bcf9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792d0}] Aliases:map[]}"
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.386724417Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.3968377Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:225b8ae0ac9d26c37f678d5b0735a926cc61d809a0aed4924405d0671c18c667 UID:9c91d438-a5f2-4b5c-9b0b-7c64de9a9e22 NetNS:/var/run/netns/8654210e-1f76-4cd6-8778-1db64072bcf9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792d0}] Aliases:map[]}"
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.396979863Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.399755555Z" level=info msg="Ran pod sandbox 225b8ae0ac9d26c37f678d5b0735a926cc61d809a0aed4924405d0671c18c667 with infra container: default/busybox/POD" id=1b2653b3-3f0f-4ccf-b40b-35476565a0c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.401020158Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8bcd44a6-0f80-4b75-b5d5-e502e38e4465 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.401265954Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8bcd44a6-0f80-4b75-b5d5-e502e38e4465 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.401397729Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8bcd44a6-0f80-4b75-b5d5-e502e38e4465 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.404877855Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5607bc8b-75e8-489b-befe-dbab597370e6 name=/runtime.v1.ImageService/PullImage
	Oct 16 19:42:56 embed-certs-751669 crio[839]: time="2025-10-16T19:42:56.409331192Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.434099016Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5607bc8b-75e8-489b-befe-dbab597370e6 name=/runtime.v1.ImageService/PullImage
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.434772319Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=312adcab-898a-4469-8e4a-72d52621b88e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.437869835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6617ea21-ae3d-4226-912e-b30775a0bc2e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.443570192Z" level=info msg="Creating container: default/busybox/busybox" id=ef90aba3-040e-4ab0-9126-b2b51567b2dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.444383657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.449237162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.449861973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.465192556Z" level=info msg="Created container 7236afe37164e46bbd89c1881ede8e51a9c7f953f70f2456f5aa257ced3224d8: default/busybox/busybox" id=ef90aba3-040e-4ab0-9126-b2b51567b2dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.465838765Z" level=info msg="Starting container: 7236afe37164e46bbd89c1881ede8e51a9c7f953f70f2456f5aa257ced3224d8" id=4153cf4c-0261-4806-a2a1-a0948af0eb21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:42:58 embed-certs-751669 crio[839]: time="2025-10-16T19:42:58.470034927Z" level=info msg="Started container" PID=1805 containerID=7236afe37164e46bbd89c1881ede8e51a9c7f953f70f2456f5aa257ced3224d8 description=default/busybox/busybox id=4153cf4c-0261-4806-a2a1-a0948af0eb21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=225b8ae0ac9d26c37f678d5b0735a926cc61d809a0aed4924405d0671c18c667
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	7236afe37164e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   225b8ae0ac9d2       busybox                                      default
	a098549fabfb9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   da5d278dd55a5       coredns-66bc5c9577-2h6z6                     kube-system
	a83691e87d69e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   317c2b78bccb8       storage-provisioner                          kube-system
	8071dd08e0cd9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   cb3c703299084       kube-proxy-lvmlh                             kube-system
	29192e030a8c5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   66ca497346486       kindnet-cjx87                                kube-system
	a673378b1d825       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   629b0d8fd8242       etcd-embed-certs-751669                      kube-system
	3ff936e71c450       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6b979639f68e6       kube-apiserver-embed-certs-751669            kube-system
	b04724f9bc160       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   30fb03a1886af       kube-scheduler-embed-certs-751669            kube-system
	4663fa342a57c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   7cf2de124788e       kube-controller-manager-embed-certs-751669   kube-system
	
	
	==> coredns [a098549fabfb95c216b1d9c90b583b7a5f766eda8d07573f6c0cd354bb6e2fcc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43729 - 12788 "HINFO IN 4381930351383876446.66416240139355131. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022238377s
	
	
	==> describe nodes <==
	Name:               embed-certs-751669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-751669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=embed-certs-751669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_42_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-751669
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:42:53 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:42:53 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:42:53 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:42:53 +0000   Thu, 16 Oct 2025 19:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-751669
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                e85b0b1d-7b19-4554-be69-b4ff58296a42
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-2h6z6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-751669                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-cjx87                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-751669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-751669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-lvmlh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-751669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-751669 event: Registered Node embed-certs-751669 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-751669 status is now: NodeReady
	
	
	==> dmesg <==
	[ +33.922450] overlayfs: idmapped layers are currently not supported
	[Oct16 19:18] overlayfs: idmapped layers are currently not supported
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a673378b1d8250b863984990ad3dac0dc4d63a1a0b110ca69aeaedcfda3233e5] <==
	{"level":"warn","ts":"2025-10-16T19:42:01.717794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.750198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.760468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.793444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.824664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.845414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.874868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.909441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.915157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.971845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:01.981688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.045529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.046215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.080895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.099667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.133810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.150042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.188553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.213867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.233683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.256498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.326414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.361517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.414835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:42:02.491244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34836","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:08 up  2:25,  0 user,  load average: 3.25, 3.53, 2.94
	Linux embed-certs-751669 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29192e030a8c53f8b4ef80c7682092dd39048505547242ed69adff31b062734e] <==
	I1016 19:42:12.854561       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:42:12.857543       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 19:42:12.858573       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:42:12.859903       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:42:12.859979       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:42:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:42:13.143486       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:42:13.143516       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:42:13.143524       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:42:13.143769       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:42:43.139026       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 19:42:43.139029       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:42:43.143531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:42:43.143579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1016 19:42:44.243707       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:42:44.243743       1 metrics.go:72] Registering metrics
	I1016 19:42:44.243829       1 controller.go:711] "Syncing nftables rules"
	I1016 19:42:53.124461       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 19:42:53.124519       1 main.go:301] handling current node
	I1016 19:43:03.117340       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 19:43:03.117400       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3ff936e71c450e76ef26283bb92c812a0d8027cd4041b546050f507821a7eff3] <==
	I1016 19:42:04.172237       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:42:04.172280       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 19:42:04.203699       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:42:04.235332       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:42:04.235633       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1016 19:42:04.267106       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1016 19:42:04.386189       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:42:04.620233       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 19:42:04.627008       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 19:42:04.627046       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:42:05.690750       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:42:05.750979       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:42:05.826395       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 19:42:05.837793       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1016 19:42:05.839130       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:42:05.845115       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:42:06.816840       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:42:06.866838       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:42:06.922417       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 19:42:06.953938       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 19:42:12.141581       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1016 19:42:12.548009       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:42:12.904566       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:42:12.935743       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1016 19:43:06.247756       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36646: use of closed network connection
	
	
	==> kube-controller-manager [4663fa342a57c7fd7bb2a79e9ac13dc5d99960aab2af186e6338762877f5d045] <==
	I1016 19:42:11.842645       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 19:42:11.843782       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 19:42:11.845440       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:42:11.846702       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 19:42:11.853222       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 19:42:11.853339       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 19:42:11.856744       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 19:42:11.867898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:42:11.868879       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 19:42:11.881782       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:42:11.881984       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:42:11.882113       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-751669"
	I1016 19:42:11.882192       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 19:42:11.886967       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:42:11.887047       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:42:11.887080       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:42:11.887215       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 19:42:11.887276       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 19:42:11.888150       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 19:42:11.889331       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 19:42:11.889731       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:42:11.890073       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 19:42:11.890531       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 19:42:11.894873       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 19:42:56.889654       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8071dd08e0cd965cf7e362f8750eab0b5036686a6b293fba141fc89612b706a2] <==
	I1016 19:42:12.994218       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:42:13.290924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:42:13.391816       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:42:13.391892       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 19:42:13.392050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:42:13.473258       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:42:13.477278       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:42:13.498517       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:42:13.498826       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:42:13.498842       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:42:13.508214       1 config.go:200] "Starting service config controller"
	I1016 19:42:13.525935       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:42:13.525957       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:42:13.511745       1 config.go:309] "Starting node config controller"
	I1016 19:42:13.525968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:42:13.525978       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:42:13.510168       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:42:13.525986       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:42:13.525990       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 19:42:13.510181       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:42:13.526029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:42:13.526033       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b04724f9bc160a8c78a0e26e4134dd7623a7de5e063cec53cc9cb1df24ee7240] <==
	I1016 19:42:04.373664       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:42:04.373692       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:42:04.373714       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 19:42:04.410779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 19:42:04.420991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 19:42:04.422279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 19:42:04.422549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 19:42:04.422597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 19:42:04.422704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 19:42:04.422740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 19:42:04.422774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 19:42:04.422938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 19:42:04.422974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 19:42:04.423005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 19:42:04.423036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 19:42:04.423070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 19:42:04.423753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 19:42:04.423930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 19:42:04.424025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 19:42:04.424133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 19:42:04.424329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 19:42:04.424357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 19:42:05.301158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 19:42:05.400053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1016 19:42:07.975151       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:42:08 embed-certs-751669 kubelet[1315]: I1016 19:42:08.317817    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-751669" podStartSLOduration=1.317798131 podStartE2EDuration="1.317798131s" podCreationTimestamp="2025-10-16 19:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:08.317562994 +0000 UTC m=+1.657128266" watchObservedRunningTime="2025-10-16 19:42:08.317798131 +0000 UTC m=+1.657363395"
	Oct 16 19:42:11 embed-certs-751669 kubelet[1315]: I1016 19:42:11.883300    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 16 19:42:11 embed-certs-751669 kubelet[1315]: I1016 19:42:11.884101    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: E1016 19:42:12.173663    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-lvmlh\" is forbidden: User \"system:node:embed-certs-751669\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-751669' and this object" podUID="6b56d13a-ca45-4f0d-92df-db96025be2e4" pod="kube-system/kube-proxy-lvmlh"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233436    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95baa320-d051-4ea0-907e-d603971eb05a-xtables-lock\") pod \"kindnet-cjx87\" (UID: \"95baa320-d051-4ea0-907e-d603971eb05a\") " pod="kube-system/kindnet-cjx87"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233490    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/95baa320-d051-4ea0-907e-d603971eb05a-cni-cfg\") pod \"kindnet-cjx87\" (UID: \"95baa320-d051-4ea0-907e-d603971eb05a\") " pod="kube-system/kindnet-cjx87"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233513    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b56d13a-ca45-4f0d-92df-db96025be2e4-lib-modules\") pod \"kube-proxy-lvmlh\" (UID: \"6b56d13a-ca45-4f0d-92df-db96025be2e4\") " pod="kube-system/kube-proxy-lvmlh"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233533    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27jc9\" (UniqueName: \"kubernetes.io/projected/6b56d13a-ca45-4f0d-92df-db96025be2e4-kube-api-access-27jc9\") pod \"kube-proxy-lvmlh\" (UID: \"6b56d13a-ca45-4f0d-92df-db96025be2e4\") " pod="kube-system/kube-proxy-lvmlh"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233556    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9js2\" (UniqueName: \"kubernetes.io/projected/95baa320-d051-4ea0-907e-d603971eb05a-kube-api-access-t9js2\") pod \"kindnet-cjx87\" (UID: \"95baa320-d051-4ea0-907e-d603971eb05a\") " pod="kube-system/kindnet-cjx87"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233572    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b56d13a-ca45-4f0d-92df-db96025be2e4-kube-proxy\") pod \"kube-proxy-lvmlh\" (UID: \"6b56d13a-ca45-4f0d-92df-db96025be2e4\") " pod="kube-system/kube-proxy-lvmlh"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233588    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b56d13a-ca45-4f0d-92df-db96025be2e4-xtables-lock\") pod \"kube-proxy-lvmlh\" (UID: \"6b56d13a-ca45-4f0d-92df-db96025be2e4\") " pod="kube-system/kube-proxy-lvmlh"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.233624    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95baa320-d051-4ea0-907e-d603971eb05a-lib-modules\") pod \"kindnet-cjx87\" (UID: \"95baa320-d051-4ea0-907e-d603971eb05a\") " pod="kube-system/kindnet-cjx87"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: I1016 19:42:12.347761    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 19:42:12 embed-certs-751669 kubelet[1315]: W1016 19:42:12.558779    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/crio-66ca4973464862510ea517f3111ce53ce11679450f0d7ec33c36ae3cd99fdfd5 WatchSource:0}: Error finding container 66ca4973464862510ea517f3111ce53ce11679450f0d7ec33c36ae3cd99fdfd5: Status 404 returned error can't find the container with id 66ca4973464862510ea517f3111ce53ce11679450f0d7ec33c36ae3cd99fdfd5
	Oct 16 19:42:13 embed-certs-751669 kubelet[1315]: I1016 19:42:13.077616    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cjx87" podStartSLOduration=1.077583582 podStartE2EDuration="1.077583582s" podCreationTimestamp="2025-10-16 19:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:13.01388377 +0000 UTC m=+6.353449051" watchObservedRunningTime="2025-10-16 19:42:13.077583582 +0000 UTC m=+6.417148870"
	Oct 16 19:42:13 embed-certs-751669 kubelet[1315]: I1016 19:42:13.803715    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lvmlh" podStartSLOduration=1.8036976789999999 podStartE2EDuration="1.803697679s" podCreationTimestamp="2025-10-16 19:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:13.148901304 +0000 UTC m=+6.488466601" watchObservedRunningTime="2025-10-16 19:42:13.803697679 +0000 UTC m=+7.143262951"
	Oct 16 19:42:53 embed-certs-751669 kubelet[1315]: I1016 19:42:53.505477    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 19:42:53 embed-certs-751669 kubelet[1315]: I1016 19:42:53.665857    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af34943c-9e1b-4fae-a8b8-815874618d70-config-volume\") pod \"coredns-66bc5c9577-2h6z6\" (UID: \"af34943c-9e1b-4fae-a8b8-815874618d70\") " pod="kube-system/coredns-66bc5c9577-2h6z6"
	Oct 16 19:42:53 embed-certs-751669 kubelet[1315]: I1016 19:42:53.666078    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm6zr\" (UniqueName: \"kubernetes.io/projected/139c88ca-0616-415b-91d4-03e93ae02f70-kube-api-access-lm6zr\") pod \"storage-provisioner\" (UID: \"139c88ca-0616-415b-91d4-03e93ae02f70\") " pod="kube-system/storage-provisioner"
	Oct 16 19:42:53 embed-certs-751669 kubelet[1315]: I1016 19:42:53.666124    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxq2\" (UniqueName: \"kubernetes.io/projected/af34943c-9e1b-4fae-a8b8-815874618d70-kube-api-access-tlxq2\") pod \"coredns-66bc5c9577-2h6z6\" (UID: \"af34943c-9e1b-4fae-a8b8-815874618d70\") " pod="kube-system/coredns-66bc5c9577-2h6z6"
	Oct 16 19:42:53 embed-certs-751669 kubelet[1315]: I1016 19:42:53.666146    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/139c88ca-0616-415b-91d4-03e93ae02f70-tmp\") pod \"storage-provisioner\" (UID: \"139c88ca-0616-415b-91d4-03e93ae02f70\") " pod="kube-system/storage-provisioner"
	Oct 16 19:42:53 embed-certs-751669 kubelet[1315]: W1016 19:42:53.881790    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/crio-317c2b78bccb89fe0f3686c68625a7a2998437ffa14a115e3586711d52b8c635 WatchSource:0}: Error finding container 317c2b78bccb89fe0f3686c68625a7a2998437ffa14a115e3586711d52b8c635: Status 404 returned error can't find the container with id 317c2b78bccb89fe0f3686c68625a7a2998437ffa14a115e3586711d52b8c635
	Oct 16 19:42:54 embed-certs-751669 kubelet[1315]: I1016 19:42:54.104585    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.10456348 podStartE2EDuration="40.10456348s" podCreationTimestamp="2025-10-16 19:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:54.089631769 +0000 UTC m=+47.429197033" watchObservedRunningTime="2025-10-16 19:42:54.10456348 +0000 UTC m=+47.444128785"
	Oct 16 19:42:56 embed-certs-751669 kubelet[1315]: I1016 19:42:56.066118    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2h6z6" podStartSLOduration=43.066097833 podStartE2EDuration="43.066097833s" podCreationTimestamp="2025-10-16 19:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:42:54.105682211 +0000 UTC m=+47.445247491" watchObservedRunningTime="2025-10-16 19:42:56.066097833 +0000 UTC m=+49.405663113"
	Oct 16 19:42:56 embed-certs-751669 kubelet[1315]: I1016 19:42:56.182269    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrdc\" (UniqueName: \"kubernetes.io/projected/9c91d438-a5f2-4b5c-9b0b-7c64de9a9e22-kube-api-access-rvrdc\") pod \"busybox\" (UID: \"9c91d438-a5f2-4b5c-9b0b-7c64de9a9e22\") " pod="default/busybox"
	
	
	==> storage-provisioner [a83691e87d69ec18adf904eecf568b7b8f7b625991c7359d30481462f3ef7913] <==
	I1016 19:42:54.012129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:42:54.032617       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:42:54.038458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:42:54.041126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:54.049624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:42:54.049796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:42:54.049877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b3bf56d-d1bb-49d9-8a23-b33cfd29d57a", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-751669_1640300d-88b6-4231-af79-a8e9411a7252 became leader
	I1016 19:42:54.050225       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-751669_1640300d-88b6-4231-af79-a8e9411a7252!
	W1016 19:42:54.054058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:54.061309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:42:54.151387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-751669_1640300d-88b6-4231-af79-a8e9411a7252!
	W1016 19:42:56.081290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:56.096133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:58.100769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:42:58.105513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:00.114999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:00.136470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:02.140053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:02.145377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:04.149213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:04.156471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:06.174958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:06.202628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:08.206294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:08.218853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-751669 -n embed-certs-751669
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-751669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-225696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-225696 --alsologtostderr -v=1: exit status 80 (1.881451749s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-225696 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:44:05.249951  486537 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:44:05.250154  486537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:05.250183  486537 out.go:374] Setting ErrFile to fd 2...
	I1016 19:44:05.250190  486537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:05.251094  486537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:44:05.251499  486537 out.go:368] Setting JSON to false
	I1016 19:44:05.251522  486537 mustload.go:65] Loading cluster: no-preload-225696
	I1016 19:44:05.252091  486537 config.go:182] Loaded profile config "no-preload-225696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:05.252805  486537 cli_runner.go:164] Run: docker container inspect no-preload-225696 --format={{.State.Status}}
	I1016 19:44:05.276780  486537 host.go:66] Checking if "no-preload-225696" exists ...
	I1016 19:44:05.277486  486537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:05.334322  486537 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:44:05.324435142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:05.334973  486537 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-225696 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 19:44:05.338359  486537 out.go:179] * Pausing node no-preload-225696 ... 
	I1016 19:44:05.342129  486537 host.go:66] Checking if "no-preload-225696" exists ...
	I1016 19:44:05.342485  486537 ssh_runner.go:195] Run: systemctl --version
	I1016 19:44:05.342613  486537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-225696
	I1016 19:44:05.361567  486537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/no-preload-225696/id_rsa Username:docker}
	I1016 19:44:05.464098  486537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:44:05.489944  486537 pause.go:52] kubelet running: true
	I1016 19:44:05.490012  486537 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:44:05.760349  486537 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:44:05.760483  486537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:44:05.829527  486537 cri.go:89] found id: "61e433028e9f5d9205876d98bffe8dc107dca16c19f9fc0816fd23296b3d01cd"
	I1016 19:44:05.829593  486537 cri.go:89] found id: "36e68cde0cbee113287402c3971f42685f0998bc56e6d7f67c52fb9aeb37e79f"
	I1016 19:44:05.829613  486537 cri.go:89] found id: "3692cc5de998b90aae84a96921c2274a4037e62497227812a010c277bf893a25"
	I1016 19:44:05.829630  486537 cri.go:89] found id: "e3b885bb4fb971bce2efdf7f5ef86bd41c06a2df486460d3723e0cafcf13050c"
	I1016 19:44:05.829640  486537 cri.go:89] found id: "41d9ccf1929d9d999832642ca90ea604512d03a91d987faa66ae896de2f7d34f"
	I1016 19:44:05.829645  486537 cri.go:89] found id: "7300b15e4085a66cb68787117e92bb710eb0d1215ec993db5fb84c3d949130d8"
	I1016 19:44:05.829648  486537 cri.go:89] found id: "54c3315a98e54e9dea40491fb54e4522a7a4b2f2741c1db37a3baf94aa4ca7fe"
	I1016 19:44:05.829651  486537 cri.go:89] found id: "3ba8ff04c879c0b8622800d55c14e4e53ce7edc4fc8527ba00de12d8cf1436a8"
	I1016 19:44:05.829654  486537 cri.go:89] found id: "948a539396c168da2900996f537d4295485126181c9390e8ecf95665342f725d"
	I1016 19:44:05.829674  486537 cri.go:89] found id: "1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0"
	I1016 19:44:05.829681  486537 cri.go:89] found id: "825fe7e210b26805cdb54da81644fbf342aa5e2833a84251a10b17d560a4d1fd"
	I1016 19:44:05.829687  486537 cri.go:89] found id: ""
	I1016 19:44:05.829755  486537 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:44:05.843439  486537 retry.go:31] will retry after 317.390587ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:05Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:44:06.162001  486537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:44:06.177457  486537 pause.go:52] kubelet running: false
	I1016 19:44:06.177552  486537 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:44:06.389331  486537 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:44:06.389422  486537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:44:06.473060  486537 cri.go:89] found id: "61e433028e9f5d9205876d98bffe8dc107dca16c19f9fc0816fd23296b3d01cd"
	I1016 19:44:06.473084  486537 cri.go:89] found id: "36e68cde0cbee113287402c3971f42685f0998bc56e6d7f67c52fb9aeb37e79f"
	I1016 19:44:06.473090  486537 cri.go:89] found id: "3692cc5de998b90aae84a96921c2274a4037e62497227812a010c277bf893a25"
	I1016 19:44:06.473094  486537 cri.go:89] found id: "e3b885bb4fb971bce2efdf7f5ef86bd41c06a2df486460d3723e0cafcf13050c"
	I1016 19:44:06.473097  486537 cri.go:89] found id: "41d9ccf1929d9d999832642ca90ea604512d03a91d987faa66ae896de2f7d34f"
	I1016 19:44:06.473101  486537 cri.go:89] found id: "7300b15e4085a66cb68787117e92bb710eb0d1215ec993db5fb84c3d949130d8"
	I1016 19:44:06.473104  486537 cri.go:89] found id: "54c3315a98e54e9dea40491fb54e4522a7a4b2f2741c1db37a3baf94aa4ca7fe"
	I1016 19:44:06.473107  486537 cri.go:89] found id: "3ba8ff04c879c0b8622800d55c14e4e53ce7edc4fc8527ba00de12d8cf1436a8"
	I1016 19:44:06.473111  486537 cri.go:89] found id: "948a539396c168da2900996f537d4295485126181c9390e8ecf95665342f725d"
	I1016 19:44:06.473117  486537 cri.go:89] found id: "1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0"
	I1016 19:44:06.473121  486537 cri.go:89] found id: "825fe7e210b26805cdb54da81644fbf342aa5e2833a84251a10b17d560a4d1fd"
	I1016 19:44:06.473125  486537 cri.go:89] found id: ""
	I1016 19:44:06.473200  486537 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:44:06.484772  486537 retry.go:31] will retry after 289.178985ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:06Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:44:06.774271  486537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:44:06.786874  486537 pause.go:52] kubelet running: false
	I1016 19:44:06.786950  486537 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:44:06.965657  486537 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:44:06.965741  486537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:44:07.043849  486537 cri.go:89] found id: "61e433028e9f5d9205876d98bffe8dc107dca16c19f9fc0816fd23296b3d01cd"
	I1016 19:44:07.043941  486537 cri.go:89] found id: "36e68cde0cbee113287402c3971f42685f0998bc56e6d7f67c52fb9aeb37e79f"
	I1016 19:44:07.043962  486537 cri.go:89] found id: "3692cc5de998b90aae84a96921c2274a4037e62497227812a010c277bf893a25"
	I1016 19:44:07.043980  486537 cri.go:89] found id: "e3b885bb4fb971bce2efdf7f5ef86bd41c06a2df486460d3723e0cafcf13050c"
	I1016 19:44:07.044017  486537 cri.go:89] found id: "41d9ccf1929d9d999832642ca90ea604512d03a91d987faa66ae896de2f7d34f"
	I1016 19:44:07.044036  486537 cri.go:89] found id: "7300b15e4085a66cb68787117e92bb710eb0d1215ec993db5fb84c3d949130d8"
	I1016 19:44:07.044055  486537 cri.go:89] found id: "54c3315a98e54e9dea40491fb54e4522a7a4b2f2741c1db37a3baf94aa4ca7fe"
	I1016 19:44:07.044087  486537 cri.go:89] found id: "3ba8ff04c879c0b8622800d55c14e4e53ce7edc4fc8527ba00de12d8cf1436a8"
	I1016 19:44:07.044111  486537 cri.go:89] found id: "948a539396c168da2900996f537d4295485126181c9390e8ecf95665342f725d"
	I1016 19:44:07.044135  486537 cri.go:89] found id: "1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0"
	I1016 19:44:07.044169  486537 cri.go:89] found id: "825fe7e210b26805cdb54da81644fbf342aa5e2833a84251a10b17d560a4d1fd"
	I1016 19:44:07.044192  486537 cri.go:89] found id: ""
	I1016 19:44:07.044288  486537 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:44:07.059137  486537 out.go:203] 
	W1016 19:44:07.062023  486537 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 19:44:07.062048  486537 out.go:285] * 
	* 
	W1016 19:44:07.069545  486537 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:44:07.073019  486537 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-225696 --alsologtostderr -v=1 failed: exit status 80
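The exit status 80 above corresponds to the GUEST_PAUSE error in the stderr log: the pause step's `sudo runc list -f json` check fails inside the node with "open /run/runc: no such file or directory", so the listing of running containers (and therefore the pause itself) aborts. A minimal shell sketch for reproducing that check by hand against the same node, using only the container name and commands that already appear in this log (the `docker exec` form assumes the docker driver used in this run):

	# Inspect the kic node container the pause command was operating on.
	docker exec no-preload-225696 sudo ls -ld /run/runc          # the state directory runc could not open
	docker exec no-preload-225696 sudo runc list -f json         # the exact listing command the pause step ran
	docker exec no-preload-225696 sudo crictl ps -a --quiet      # CRI-O's own view of the containers that are running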
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-225696
helpers_test.go:243: (dbg) docker inspect no-preload-225696:

-- stdout --
	[
	    {
	        "Id": "67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a",
	        "Created": "2025-10-16T19:41:24.445990771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481663,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:43:02.608335315Z",
	            "FinishedAt": "2025-10-16T19:43:01.716326434Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/hosts",
	        "LogPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a-json.log",
	        "Name": "/no-preload-225696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-225696:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-225696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a",
	                "LowerDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-225696",
	                "Source": "/var/lib/docker/volumes/no-preload-225696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-225696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-225696",
	                "name.minikube.sigs.k8s.io": "no-preload-225696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14cc910467f046f6a0ecabc7f40eb7360c7d6ed0b7a5d7970c1f96646ec908e7",
	            "SandboxKey": "/var/run/docker/netns/14cc910467f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-225696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:05:c3:30:72:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39b67ad0eeb0bd39715bf4033d345a54f5da2b5672e2db285dbc6c4fed23f45e",
	                    "EndpointID": "ce69aef8604ca352d71031cf8a96d77a6a70307a1027b3d5c93f9467e9b57759",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-225696",
	                        "67fd0d064b81"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696: exit status 2 (350.508679ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-225696 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-225696 logs -n 25: (1.318419615s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-853056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-853056    │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p cert-options-853056                                                                                                                                                                                                                        │ cert-options-853056    │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:39 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │                     │
	│ stop    │ -p old-k8s-version-663330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │ 16 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p cert-expiration-828182                                                                                                                                                                                                                     │ cert-expiration-828182 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:43:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:43:21.948837  484119 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:43:21.948936  484119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:43:21.948941  484119 out.go:374] Setting ErrFile to fd 2...
	I1016 19:43:21.948946  484119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:43:21.949268  484119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:43:21.949681  484119 out.go:368] Setting JSON to false
	I1016 19:43:21.951749  484119 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8731,"bootTime":1760635071,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:43:21.951824  484119 start.go:141] virtualization:  
	I1016 19:43:21.954613  484119 out.go:179] * [embed-certs-751669] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:43:21.958657  484119 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:43:21.958784  484119 notify.go:220] Checking for updates...
	I1016 19:43:21.965654  484119 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:43:21.968620  484119 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:43:21.971548  484119 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:43:21.974453  484119 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:43:21.977605  484119 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:43:21.980972  484119 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:43:21.981597  484119 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:43:22.022084  484119 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:43:22.022221  484119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:43:22.143466  484119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:43:22.130430664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:43:22.143571  484119 docker.go:318] overlay module found
	I1016 19:43:22.147312  484119 out.go:179] * Using the docker driver based on existing profile
	I1016 19:43:22.150363  484119 start.go:305] selected driver: docker
	I1016 19:43:22.150384  484119 start.go:925] validating driver "docker" against &{Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:22.150493  484119 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:43:22.151220  484119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:43:22.233488  484119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:43:22.224234315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:43:22.233848  484119 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:43:22.233886  484119 cni.go:84] Creating CNI manager for ""
	I1016 19:43:22.233952  484119 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:43:22.234000  484119 start.go:349] cluster config:
	{Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:22.237395  484119 out.go:179] * Starting "embed-certs-751669" primary control-plane node in "embed-certs-751669" cluster
	I1016 19:43:22.241180  484119 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:43:22.244200  484119 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:43:22.247199  484119 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:43:22.247276  484119 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:43:22.247287  484119 cache.go:58] Caching tarball of preloaded images
	I1016 19:43:22.247383  484119 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:43:22.247393  484119 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:43:22.247513  484119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/config.json ...
	I1016 19:43:22.247756  484119 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:43:22.268212  484119 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:43:22.268230  484119 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:43:22.268243  484119 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:43:22.268265  484119 start.go:360] acquireMachinesLock for embed-certs-751669: {Name:mkb92787bce004fe7aa2e02dbed85cdecf06ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:22.268318  484119 start.go:364] duration metric: took 35.972µs to acquireMachinesLock for "embed-certs-751669"
	I1016 19:43:22.268337  484119 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:43:22.268342  484119 fix.go:54] fixHost starting: 
	I1016 19:43:22.268601  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:22.299548  484119 fix.go:112] recreateIfNeeded on embed-certs-751669: state=Stopped err=<nil>
	W1016 19:43:22.299629  484119 fix.go:138] unexpected machine state, will restart: <nil>
	W1016 19:43:18.984831  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:21.486847  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:22.302957  484119 out.go:252] * Restarting existing docker container for "embed-certs-751669" ...
	I1016 19:43:22.303121  484119 cli_runner.go:164] Run: docker start embed-certs-751669
	I1016 19:43:22.634711  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:22.663278  484119 kic.go:430] container "embed-certs-751669" state is running.
	I1016 19:43:22.663673  484119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:43:22.702623  484119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/config.json ...
	I1016 19:43:22.702837  484119 machine.go:93] provisionDockerMachine start ...
	I1016 19:43:22.702893  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:22.738396  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:22.739164  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:22.739185  484119 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:43:22.739861  484119 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:43:25.908933  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-751669
	
	I1016 19:43:25.908994  484119 ubuntu.go:182] provisioning hostname "embed-certs-751669"
	I1016 19:43:25.909077  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:25.936459  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:25.936766  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:25.936784  484119 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-751669 && echo "embed-certs-751669" | sudo tee /etc/hostname
	I1016 19:43:26.135873  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-751669
	
	I1016 19:43:26.135996  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:26.178885  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:26.179216  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:26.179236  484119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-751669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-751669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-751669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:43:26.357188  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:43:26.357274  484119 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:43:26.357323  484119 ubuntu.go:190] setting up certificates
	I1016 19:43:26.357379  484119 provision.go:84] configureAuth start
	I1016 19:43:26.357464  484119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:43:26.377614  484119 provision.go:143] copyHostCerts
	I1016 19:43:26.377676  484119 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:43:26.377693  484119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:43:26.377762  484119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:43:26.377858  484119 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:43:26.377863  484119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:43:26.377888  484119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:43:26.377948  484119 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:43:26.377953  484119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:43:26.377975  484119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:43:26.378026  484119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.embed-certs-751669 san=[127.0.0.1 192.168.85.2 embed-certs-751669 localhost minikube]
	I1016 19:43:26.763290  484119 provision.go:177] copyRemoteCerts
	I1016 19:43:26.763402  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:43:26.763474  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:26.782648  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:26.892411  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:43:26.925051  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	W1016 19:43:23.984843  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:26.491329  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:26.958416  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 19:43:26.988815  484119 provision.go:87] duration metric: took 631.393907ms to configureAuth
	I1016 19:43:26.988839  484119 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:43:26.989019  484119 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:43:26.989115  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.014604  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:27.014911  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:27.014925  484119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:43:27.483967  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:43:27.484034  484119 machine.go:96] duration metric: took 4.781187278s to provisionDockerMachine
	I1016 19:43:27.484059  484119 start.go:293] postStartSetup for "embed-certs-751669" (driver="docker")
	I1016 19:43:27.484087  484119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:43:27.484182  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:43:27.484249  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.515552  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:27.639236  484119 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:43:27.643379  484119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:43:27.643406  484119 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:43:27.643418  484119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:43:27.643472  484119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:43:27.643550  484119 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:43:27.643651  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:43:27.657041  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:43:27.692832  484119 start.go:296] duration metric: took 208.740895ms for postStartSetup
	I1016 19:43:27.692951  484119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:43:27.693031  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.724532  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:27.834593  484119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:43:27.839900  484119 fix.go:56] duration metric: took 5.571550145s for fixHost
	I1016 19:43:27.839922  484119 start.go:83] releasing machines lock for "embed-certs-751669", held for 5.571595495s
	I1016 19:43:27.840010  484119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:43:27.861385  484119 ssh_runner.go:195] Run: cat /version.json
	I1016 19:43:27.861443  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.861742  484119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:43:27.861793  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.908387  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:27.913577  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:28.118781  484119 ssh_runner.go:195] Run: systemctl --version
	I1016 19:43:28.127661  484119 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:43:28.225524  484119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:43:28.236718  484119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:43:28.236802  484119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:43:28.250387  484119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:43:28.250415  484119 start.go:495] detecting cgroup driver to use...
	I1016 19:43:28.250450  484119 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:43:28.250512  484119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:43:28.277210  484119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:43:28.299739  484119 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:43:28.299810  484119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:43:28.323166  484119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:43:28.345579  484119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:43:28.546300  484119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:43:28.709183  484119 docker.go:234] disabling docker service ...
	I1016 19:43:28.709285  484119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:43:28.725319  484119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:43:28.742720  484119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:43:28.904465  484119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:43:29.081982  484119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:43:29.100132  484119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:43:29.118517  484119 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:43:29.118631  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.134467  484119 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:43:29.134564  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.143508  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.152633  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.164229  484119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:43:29.174601  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.185294  484119 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.199713  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.211999  484119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:43:29.224909  484119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:43:29.233377  484119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:43:29.388952  484119 ssh_runner.go:195] Run: sudo systemctl restart crio
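Note: the sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A quick way to confirm the result on the node (a sketch reconstructed from the commands shown, not a capture of the actual file) is:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected to match roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls = [ ... ] block)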
	I1016 19:43:29.717014  484119 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:43:29.717172  484119 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:43:29.721781  484119 start.go:563] Will wait 60s for crictl version
	I1016 19:43:29.721900  484119 ssh_runner.go:195] Run: which crictl
	I1016 19:43:29.725833  484119 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:43:29.770706  484119 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:43:29.770865  484119 ssh_runner.go:195] Run: crio --version
	I1016 19:43:29.810901  484119 ssh_runner.go:195] Run: crio --version
	I1016 19:43:29.855938  484119 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:43:29.859281  484119 cli_runner.go:164] Run: docker network inspect embed-certs-751669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:43:29.880856  484119 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:43:29.885451  484119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:43:29.898805  484119 kubeadm.go:883] updating cluster {Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:43:29.898923  484119 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:43:29.898981  484119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:43:29.945632  484119 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:43:29.945653  484119 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:43:29.945711  484119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:43:29.995510  484119 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:43:29.995584  484119 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:43:29.995606  484119 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:43:29.995750  484119 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-751669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:43:29.995865  484119 ssh_runner.go:195] Run: crio config
	I1016 19:43:30.107419  484119 cni.go:84] Creating CNI manager for ""
	I1016 19:43:30.107443  484119 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:43:30.107504  484119 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:43:30.107589  484119 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-751669 NodeName:embed-certs-751669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:43:30.107775  484119 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-751669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:43:30.107878  484119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:43:30.119319  484119 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:43:30.119397  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:43:30.131865  484119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1016 19:43:30.156571  484119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:43:30.176549  484119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
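Note: the rendered kubeadm config shown above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. If a start like this fails at the kubeadm stage, the file can be sanity-checked in place; assuming kubeadm is cached alongside kubelet and kubectl in the same binaries directory, something like:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new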
	I1016 19:43:30.194149  484119 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:43:30.199230  484119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:43:30.210357  484119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:43:30.371318  484119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:43:30.390496  484119 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669 for IP: 192.168.85.2
	I1016 19:43:30.390563  484119 certs.go:195] generating shared ca certs ...
	I1016 19:43:30.390593  484119 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:30.390791  484119 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:43:30.390883  484119 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:43:30.390906  484119 certs.go:257] generating profile certs ...
	I1016 19:43:30.391035  484119 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.key
	I1016 19:43:30.391137  484119 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key.98c460c4
	I1016 19:43:30.391226  484119 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key
	I1016 19:43:30.391433  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:43:30.391511  484119 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:43:30.391536  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:43:30.391594  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:43:30.391636  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:43:30.391695  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:43:30.391770  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:43:30.392673  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:43:30.425130  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:43:30.499911  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:43:30.584723  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:43:30.671903  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1016 19:43:30.733914  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 19:43:30.790730  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:43:30.814637  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:43:30.840412  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:43:30.868409  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:43:30.892479  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:43:30.921100  484119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:43:30.942883  484119 ssh_runner.go:195] Run: openssl version
	I1016 19:43:30.952475  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:43:30.964638  484119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:43:30.968748  484119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:43:30.968880  484119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:43:31.025265  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:43:31.036191  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:43:31.046843  484119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:43:31.051475  484119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:43:31.051574  484119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:43:31.113280  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:43:31.137527  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:43:31.153815  484119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:43:31.159708  484119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:43:31.159840  484119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:43:31.242568  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
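Note: the hash-named symlinks created above follow the OpenSSL CA lookup convention: each CA under /etc/ssl/certs is reachable as <subject-hash>.0. The hash is the output of the same openssl invocation the log runs, e.g. (hash value taken from this run):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  ->  /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem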
	I1016 19:43:31.253722  484119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:43:31.258751  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:43:31.340961  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:43:31.432585  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:43:31.561794  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:43:31.648725  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:43:31.707918  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
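Note: each of the -checkend 86400 runs above asks openssl whether the certificate remains valid for at least another 86400 seconds (24h); a zero exit means it is not expiring within that window. The same check can be repeated by hand, for example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "apiserver.crt valid for at least 24h"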
	I1016 19:43:31.809654  484119 kubeadm.go:400] StartCluster: {Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:31.809807  484119 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:43:31.809907  484119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:43:31.866627  484119 cri.go:89] found id: "8a2a4e8f60de83dc93958769d40834e8c6e8098a4d24326639566a8eb761d219"
	I1016 19:43:31.866700  484119 cri.go:89] found id: "cdb6c8787e86665ba81ed5e2b63948fa8bd322ac9fe2eeaabc3de67e2ae1762a"
	I1016 19:43:31.866719  484119 cri.go:89] found id: "2368c8473fac0e17d1c889c89f8bd36e68e1075d0382ddf4f2ad6c01dcf5819f"
	I1016 19:43:31.866750  484119 cri.go:89] found id: "01a051b12eaa75566bd0ed32bda2684f339c52afc7b5e80f79acc29785a0fe59"
	I1016 19:43:31.866779  484119 cri.go:89] found id: ""
	I1016 19:43:31.866854  484119 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:43:31.887219  484119 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:43:31Z" level=error msg="open /run/runc: no such file or directory"
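Note: the runc failure above is expected on a freshly restarted node: "sudo runc list -f json" reads runc's default state directory (assumed here to be /run/runc), which does not exist until runc has created state for at least one container, so the unpause probe fails harmlessly and minikube continues with the restart path.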
	I1016 19:43:31.887342  484119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:43:31.913062  484119 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:43:31.913146  484119 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:43:31.913227  484119 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:43:31.930933  484119 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:43:31.931630  484119 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-751669" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:43:31.931948  484119 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-751669" cluster setting kubeconfig missing "embed-certs-751669" context setting]
	I1016 19:43:31.932451  484119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:31.934318  484119 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:43:31.947399  484119 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1016 19:43:31.947481  484119 kubeadm.go:601] duration metric: took 34.315434ms to restartPrimaryControlPlane
	I1016 19:43:31.947505  484119 kubeadm.go:402] duration metric: took 137.868707ms to StartCluster
	I1016 19:43:31.947542  484119 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:31.947635  484119 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:43:31.949878  484119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:31.950419  484119 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:43:31.950526  484119 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:43:31.950602  484119 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-751669"
	I1016 19:43:31.950623  484119 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-751669"
	W1016 19:43:31.950629  484119 addons.go:247] addon storage-provisioner should already be in state true
	I1016 19:43:31.950650  484119 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:43:31.951190  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:31.951357  484119 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:43:31.951747  484119 addons.go:69] Setting dashboard=true in profile "embed-certs-751669"
	I1016 19:43:31.951801  484119 addons.go:238] Setting addon dashboard=true in "embed-certs-751669"
	W1016 19:43:31.951822  484119 addons.go:247] addon dashboard should already be in state true
	I1016 19:43:31.951897  484119 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:43:31.952508  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:31.952723  484119 addons.go:69] Setting default-storageclass=true in profile "embed-certs-751669"
	I1016 19:43:31.952772  484119 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-751669"
	I1016 19:43:31.953070  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:31.957445  484119 out.go:179] * Verifying Kubernetes components...
	I1016 19:43:31.963507  484119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:43:32.004694  484119 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:43:32.007847  484119 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:43:32.007882  484119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:43:32.007971  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:32.019823  484119 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 19:43:32.022680  484119 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1016 19:43:28.987082  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:31.484120  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:32.023470  484119 addons.go:238] Setting addon default-storageclass=true in "embed-certs-751669"
	W1016 19:43:32.023490  484119 addons.go:247] addon default-storageclass should already be in state true
	I1016 19:43:32.023515  484119 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:43:32.023967  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:32.026027  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 19:43:32.026051  484119 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 19:43:32.026115  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:32.088630  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:32.090099  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:32.104490  484119 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:43:32.104516  484119 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:43:32.104585  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:32.129391  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:32.344352  484119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:43:32.344848  484119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:43:32.352130  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 19:43:32.352152  484119 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 19:43:32.403361  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 19:43:32.403393  484119 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 19:43:32.407893  484119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:43:32.430310  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 19:43:32.430335  484119 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 19:43:32.458785  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 19:43:32.458872  484119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 19:43:32.507074  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 19:43:32.507160  484119 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 19:43:32.575727  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 19:43:32.575811  484119 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 19:43:32.594560  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 19:43:32.594632  484119 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 19:43:32.609515  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 19:43:32.609586  484119 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 19:43:32.627669  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:43:32.627742  484119 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 19:43:32.654329  484119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
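Note: once the apply above completes, the dashboard objects land in the kubernetes-dashboard namespace; a quick status check from the node, reusing the same kubeconfig and kubectl binary the test uses, would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,po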
	W1016 19:43:33.983653  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:35.985264  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:38.246686  484119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.901795355s)
	I1016 19:43:38.246789  484119 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.902411666s)
	I1016 19:43:38.247160  484119 node_ready.go:35] waiting up to 6m0s for node "embed-certs-751669" to be "Ready" ...
	I1016 19:43:38.344708  484119 node_ready.go:49] node "embed-certs-751669" is "Ready"
	I1016 19:43:38.344737  484119 node_ready.go:38] duration metric: took 97.560816ms for node "embed-certs-751669" to be "Ready" ...
	I1016 19:43:38.344751  484119 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:43:38.344809  484119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:43:39.494887  484119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.086960972s)
	I1016 19:43:39.495017  484119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.840606973s)
	I1016 19:43:39.495207  484119 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.150381429s)
	I1016 19:43:39.495224  484119 api_server.go:72] duration metric: took 7.543826369s to wait for apiserver process to appear ...
	I1016 19:43:39.495230  484119 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:43:39.495247  484119 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:43:39.498524  484119 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-751669 addons enable metrics-server
	
	I1016 19:43:39.501534  484119 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1016 19:43:39.504061  484119 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 19:43:39.504135  484119 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 19:43:39.505352  484119 addons.go:514] duration metric: took 7.554826197s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1016 19:43:39.995395  484119 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:43:40.012531  484119 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1016 19:43:40.016514  484119 api_server.go:141] control plane version: v1.34.1
	I1016 19:43:40.016602  484119 api_server.go:131] duration metric: took 521.363778ms to wait for apiserver health ...
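Note: the 500 earlier in this sequence came only from the rbac/bootstrap-roles post-start hook still settling; by this point /healthz returns 200. The same per-check breakdown can be pulled on demand with the verbose form of the endpoint, e.g.:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get --raw='/healthz?verbose'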
	I1016 19:43:40.016629  484119 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:43:40.024388  484119 system_pods.go:59] 8 kube-system pods found
	I1016 19:43:40.024484  484119 system_pods.go:61] "coredns-66bc5c9577-2h6z6" [af34943c-9e1b-4fae-a8b8-815874618d70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:43:40.024519  484119 system_pods.go:61] "etcd-embed-certs-751669" [37b3cc63-0b45-4c80-ae4b-a06c4869d837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:43:40.024544  484119 system_pods.go:61] "kindnet-cjx87" [95baa320-d051-4ea0-907e-d603971eb05a] Running
	I1016 19:43:40.024570  484119 system_pods.go:61] "kube-apiserver-embed-certs-751669" [d831a0cf-77fe-4e1c-b8b3-ee99ad90700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:43:40.024604  484119 system_pods.go:61] "kube-controller-manager-embed-certs-751669" [833e0630-a2ac-4861-8b1c-ed28a314d799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:43:40.024629  484119 system_pods.go:61] "kube-proxy-lvmlh" [6b56d13a-ca45-4f0d-92df-db96025be2e4] Running
	I1016 19:43:40.024655  484119 system_pods.go:61] "kube-scheduler-embed-certs-751669" [eeed62cd-46a6-4ec3-8e39-4f27a264982e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:43:40.024689  484119 system_pods.go:61] "storage-provisioner" [139c88ca-0616-415b-91d4-03e93ae02f70] Running
	I1016 19:43:40.024715  484119 system_pods.go:74] duration metric: took 8.064856ms to wait for pod list to return data ...
	I1016 19:43:40.024739  484119 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:43:40.028445  484119 default_sa.go:45] found service account: "default"
	I1016 19:43:40.028525  484119 default_sa.go:55] duration metric: took 3.760089ms for default service account to be created ...
	I1016 19:43:40.028553  484119 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:43:40.032930  484119 system_pods.go:86] 8 kube-system pods found
	I1016 19:43:40.033011  484119 system_pods.go:89] "coredns-66bc5c9577-2h6z6" [af34943c-9e1b-4fae-a8b8-815874618d70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:43:40.033036  484119 system_pods.go:89] "etcd-embed-certs-751669" [37b3cc63-0b45-4c80-ae4b-a06c4869d837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:43:40.033059  484119 system_pods.go:89] "kindnet-cjx87" [95baa320-d051-4ea0-907e-d603971eb05a] Running
	I1016 19:43:40.033081  484119 system_pods.go:89] "kube-apiserver-embed-certs-751669" [d831a0cf-77fe-4e1c-b8b3-ee99ad90700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:43:40.033103  484119 system_pods.go:89] "kube-controller-manager-embed-certs-751669" [833e0630-a2ac-4861-8b1c-ed28a314d799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:43:40.033125  484119 system_pods.go:89] "kube-proxy-lvmlh" [6b56d13a-ca45-4f0d-92df-db96025be2e4] Running
	I1016 19:43:40.033167  484119 system_pods.go:89] "kube-scheduler-embed-certs-751669" [eeed62cd-46a6-4ec3-8e39-4f27a264982e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:43:40.033188  484119 system_pods.go:89] "storage-provisioner" [139c88ca-0616-415b-91d4-03e93ae02f70] Running
	I1016 19:43:40.033212  484119 system_pods.go:126] duration metric: took 4.641099ms to wait for k8s-apps to be running ...
	I1016 19:43:40.033235  484119 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:43:40.033329  484119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:43:40.053917  484119 system_svc.go:56] duration metric: took 20.662994ms WaitForService to wait for kubelet
	I1016 19:43:40.053998  484119 kubeadm.go:586] duration metric: took 8.102588062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:43:40.054033  484119 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:43:40.059362  484119 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:43:40.059449  484119 node_conditions.go:123] node cpu capacity is 2
	I1016 19:43:40.059479  484119 node_conditions.go:105] duration metric: took 5.414432ms to run NodePressure ...
	I1016 19:43:40.059505  484119 start.go:241] waiting for startup goroutines ...
	I1016 19:43:40.059545  484119 start.go:246] waiting for cluster config update ...
	I1016 19:43:40.059570  484119 start.go:255] writing updated cluster config ...
	I1016 19:43:40.059953  484119 ssh_runner.go:195] Run: rm -f paused
	I1016 19:43:40.064592  484119 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:43:40.069615  484119 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2h6z6" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:43:38.485739  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:40.983982  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:42.084369  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:44.575907  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:46.577259  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:42.984333  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:45.484907  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:48.577842  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:50.583583  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:47.985123  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:50.490408  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:51.983704  481534 pod_ready.go:94] pod "coredns-66bc5c9577-jr55z" is "Ready"
	I1016 19:43:51.983731  481534 pod_ready.go:86] duration metric: took 35.005287371s for pod "coredns-66bc5c9577-jr55z" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.986571  481534 pod_ready.go:83] waiting for pod "etcd-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.991448  481534 pod_ready.go:94] pod "etcd-no-preload-225696" is "Ready"
	I1016 19:43:51.991479  481534 pod_ready.go:86] duration metric: took 4.882726ms for pod "etcd-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.993927  481534 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.999836  481534 pod_ready.go:94] pod "kube-apiserver-no-preload-225696" is "Ready"
	I1016 19:43:51.999867  481534 pod_ready.go:86] duration metric: took 5.912079ms for pod "kube-apiserver-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.014951  481534 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.181561  481534 pod_ready.go:94] pod "kube-controller-manager-no-preload-225696" is "Ready"
	I1016 19:43:52.181590  481534 pod_ready.go:86] duration metric: took 166.609633ms for pod "kube-controller-manager-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.381611  481534 pod_ready.go:83] waiting for pod "kube-proxy-m86rv" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.781846  481534 pod_ready.go:94] pod "kube-proxy-m86rv" is "Ready"
	I1016 19:43:52.781873  481534 pod_ready.go:86] duration metric: took 400.235214ms for pod "kube-proxy-m86rv" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.983109  481534 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:53.382239  481534 pod_ready.go:94] pod "kube-scheduler-no-preload-225696" is "Ready"
	I1016 19:43:53.382268  481534 pod_ready.go:86] duration metric: took 399.132362ms for pod "kube-scheduler-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:53.382280  481534 pod_ready.go:40] duration metric: took 36.408094274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:43:53.436731  481534 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:43:53.440081  481534 out.go:179] * Done! kubectl is now configured to use "no-preload-225696" cluster and "default" namespace by default
	W1016 19:43:53.075759  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:55.575355  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:58.075280  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:44:00.100477  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:44:02.576476  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:44:05.079107  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.564288295Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc9d3209-d506-460e-9cf8-4f5bb66fd9eb name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.565858502Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9292701e-67d3-4161-aaf2-47000b76ec40 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.567153581Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper" id=90f5f41f-28c7-4248-9230-028f3c37c92f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.567368385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.584105858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.584676252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.608458067Z" level=info msg="Created container 1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper" id=90f5f41f-28c7-4248-9230-028f3c37c92f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.609690104Z" level=info msg="Starting container: 1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0" id=31845089-ccc7-49c5-8cc4-0dfcbf495d9a name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.613575151Z" level=info msg="Started container" PID=1635 containerID=1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper id=31845089-ccc7-49c5-8cc4-0dfcbf495d9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a
	Oct 16 19:43:50 no-preload-225696 conmon[1633]: conmon 1f991b7f7f42165c9ce2 <ninfo>: container 1635 exited with status 1
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.817752317Z" level=info msg="Removing container: f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced" id=538894d2-f421-49d2-925c-7dabcf8b0010 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.827858746Z" level=info msg="Error loading conmon cgroup of container f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced: cgroup deleted" id=538894d2-f421-49d2-925c-7dabcf8b0010 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.838280337Z" level=info msg="Removed container f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper" id=538894d2-f421-49d2-925c-7dabcf8b0010 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.608240132Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.615443425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.61547938Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.61550178Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.618703348Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.618736727Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.618761171Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.622184149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.622335362Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.622571623Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.626859612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.626892309Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1f991b7f7f421       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   3b163066cd8c9       dashboard-metrics-scraper-6ffb444bf9-4xtkn   kubernetes-dashboard
	61e433028e9f5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   f1f2fea3bb0cc       storage-provisioner                          kube-system
	825fe7e210b26       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   7e473745e6c21       kubernetes-dashboard-855c9754f9-d6pcj        kubernetes-dashboard
	36e68cde0cbee       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   5d993905d1f0d       coredns-66bc5c9577-jr55z                     kube-system
	3692cc5de998b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   bd2f1c31274b1       kindnet-kfg52                                kube-system
	574d84457d8bf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   85dacd4bbaa25       busybox                                      default
	e3b885bb4fb97       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   92064e38d1c3e       kube-proxy-m86rv                             kube-system
	41d9ccf1929d9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   f1f2fea3bb0cc       storage-provisioner                          kube-system
	7300b15e4085a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   e6c40dc4677fc       kube-apiserver-no-preload-225696             kube-system
	54c3315a98e54       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   2ae11930d2c32       kube-controller-manager-no-preload-225696    kube-system
	3ba8ff04c879c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   0db94b543a1da       kube-scheduler-no-preload-225696             kube-system
	948a539396c16       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   f9950375cf903       etcd-no-preload-225696                       kube-system
	
	
	==> coredns [36e68cde0cbee113287402c3971f42685f0998bc56e6d7f67c52fb9aeb37e79f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46376 - 20167 "HINFO IN 5168467559581767183.966991120985240760. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007628714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-225696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-225696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=no-preload-225696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_42_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:42:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-225696
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:43:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-225696
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                7c11c781-d716-4555-8158-86dd5d9b993e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-jr55z                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-225696                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-kfg52                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-225696              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-225696     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-m86rv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-225696              100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4xtkn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d6pcj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Normal   Starting                 2m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m2s (x4 over 2m2s)  kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s (x4 over 2m2s)  kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s (x4 over 2m2s)  kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 113s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   Starting                 113s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           109s                 node-controller  Node no-preload-225696 event: Registered Node no-preload-225696 in Controller
	  Normal   NodeReady                94s                  kubelet          Node no-preload-225696 status is now: NodeReady
	  Normal   Starting                 58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node no-preload-225696 event: Registered Node no-preload-225696 in Controller
	
	
	==> dmesg <==
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [948a539396c168da2900996f537d4295485126181c9390e8ecf95665342f725d] <==
	{"level":"warn","ts":"2025-10-16T19:43:13.262357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.303156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.303414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.332344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.352027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.372155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.404884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.411227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.423788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.452172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.472110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.504002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.527818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.549211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.567383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.583440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.605525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.621232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.643893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.659425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.678665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.711306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.731126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.792290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.889417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44764","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:44:08 up  2:26,  0 user,  load average: 3.45, 3.58, 2.99
	Linux no-preload-225696 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3692cc5de998b90aae84a96921c2274a4037e62497227812a010c277bf893a25] <==
	I1016 19:43:16.327200       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:43:16.409899       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:43:16.410165       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:43:16.410219       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:43:16.410259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:43:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:43:16.607063       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:43:16.607166       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:43:16.607202       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:43:16.607699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:43:46.608100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 19:43:46.608243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:43:46.608337       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:43:46.609590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1016 19:43:48.107455       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:43:48.107559       1 metrics.go:72] Registering metrics
	I1016 19:43:48.107643       1 controller.go:711] "Syncing nftables rules"
	I1016 19:43:56.607315       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:43:56.607975       1 main.go:301] handling current node
	I1016 19:44:06.615094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:44:06.615128       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7300b15e4085a66cb68787117e92bb710eb0d1215ec993db5fb84c3d949130d8] <==
	I1016 19:43:15.348587       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 19:43:15.355624       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:43:15.355656       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:43:15.380954       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:43:15.383219       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 19:43:15.383252       1 policy_source.go:240] refreshing policies
	I1016 19:43:15.383312       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:43:15.387316       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 19:43:15.387371       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:43:15.397239       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:43:15.403731       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:43:15.406993       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:43:15.416374       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 19:43:15.418217       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:43:15.631977       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:43:15.805487       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:43:15.975469       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:43:16.131313       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:43:16.257805       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:43:16.291004       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:43:16.450835       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.107.60"}
	I1016 19:43:16.475442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.236.223"}
	I1016 19:43:18.799522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:43:19.200750       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:43:19.248626       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [54c3315a98e54e9dea40491fb54e4522a7a4b2f2741c1db37a3baf94aa4ca7fe] <==
	I1016 19:43:18.793553       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 19:43:18.793876       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:43:18.796733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:43:18.797067       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 19:43:18.800382       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 19:43:18.801638       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:43:18.817618       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:43:18.823943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:18.828123       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:43:18.829255       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 19:43:18.831486       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:43:18.834851       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:18.834880       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:43:18.834888       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:43:18.842028       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 19:43:18.842220       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 19:43:18.842250       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:43:18.842633       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 19:43:18.846056       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 19:43:18.846096       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:43:18.846159       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:43:18.846193       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:43:18.846225       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-225696"
	I1016 19:43:18.846273       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 19:43:18.856038       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [e3b885bb4fb971bce2efdf7f5ef86bd41c06a2df486460d3723e0cafcf13050c] <==
	I1016 19:43:16.761922       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:43:16.931015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:43:17.031686       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:43:17.031802       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:43:17.031901       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:43:17.062859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:43:17.062979       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:43:17.067110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:43:17.067457       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:43:17.067647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:17.069007       1 config.go:200] "Starting service config controller"
	I1016 19:43:17.069072       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:43:17.069116       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:43:17.069341       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:43:17.069416       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:43:17.069445       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:43:17.070122       1 config.go:309] "Starting node config controller"
	I1016 19:43:17.073057       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:43:17.073167       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:43:17.169661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:43:17.169703       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:43:17.169750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ba8ff04c879c0b8622800d55c14e4e53ce7edc4fc8527ba00de12d8cf1436a8] <==
	I1016 19:43:14.123229       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:43:17.205126       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:43:17.205264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:17.216619       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:43:17.216753       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:17.218572       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:17.216732       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:43:17.218661       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:43:17.216764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:17.219007       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:17.216777       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:43:17.318716       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:17.319493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:17.319560       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: I1016 19:43:19.518273     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4xtkn\" (UID: \"1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn"
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: I1016 19:43:19.518329     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbld5\" (UniqueName: \"kubernetes.io/projected/1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33-kube-api-access-nbld5\") pod \"dashboard-metrics-scraper-6ffb444bf9-4xtkn\" (UID: \"1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn"
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: W1016 19:43:19.724124     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/crio-7e473745e6c21ce854a519f0507e84760bd7b48e03749910e29176ede968641a WatchSource:0}: Error finding container 7e473745e6c21ce854a519f0507e84760bd7b48e03749910e29176ede968641a: Status 404 returned error can't find the container with id 7e473745e6c21ce854a519f0507e84760bd7b48e03749910e29176ede968641a
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: W1016 19:43:19.740628     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/crio-3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a WatchSource:0}: Error finding container 3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a: Status 404 returned error can't find the container with id 3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a
	Oct 16 19:43:21 no-preload-225696 kubelet[768]: I1016 19:43:21.675550     768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 19:43:30 no-preload-225696 kubelet[768]: I1016 19:43:30.110050     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d6pcj" podStartSLOduration=5.260268199 podStartE2EDuration="11.109735855s" podCreationTimestamp="2025-10-16 19:43:19 +0000 UTC" firstStartedPulling="2025-10-16 19:43:19.728744232 +0000 UTC m=+9.334207639" lastFinishedPulling="2025-10-16 19:43:25.578211889 +0000 UTC m=+15.183675295" observedRunningTime="2025-10-16 19:43:25.720819607 +0000 UTC m=+15.326283022" watchObservedRunningTime="2025-10-16 19:43:30.109735855 +0000 UTC m=+19.715199270"
	Oct 16 19:43:31 no-preload-225696 kubelet[768]: I1016 19:43:31.766952     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podStartSLOduration=0.865429377 podStartE2EDuration="12.766932902s" podCreationTimestamp="2025-10-16 19:43:19 +0000 UTC" firstStartedPulling="2025-10-16 19:43:19.745876946 +0000 UTC m=+9.351340353" lastFinishedPulling="2025-10-16 19:43:31.647380471 +0000 UTC m=+21.252843878" observedRunningTime="2025-10-16 19:43:31.766535088 +0000 UTC m=+21.371998503" watchObservedRunningTime="2025-10-16 19:43:31.766932902 +0000 UTC m=+21.372396374"
	Oct 16 19:43:32 no-preload-225696 kubelet[768]: I1016 19:43:32.750023     768 scope.go:117] "RemoveContainer" containerID="4e0d282a3f82cfa941e8e464ba009fd304cdf2b8ab2058c46497336badbc3818"
	Oct 16 19:43:33 no-preload-225696 kubelet[768]: I1016 19:43:33.753982     768 scope.go:117] "RemoveContainer" containerID="4e0d282a3f82cfa941e8e464ba009fd304cdf2b8ab2058c46497336badbc3818"
	Oct 16 19:43:33 no-preload-225696 kubelet[768]: I1016 19:43:33.754281     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:33 no-preload-225696 kubelet[768]: E1016 19:43:33.754436     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:34 no-preload-225696 kubelet[768]: I1016 19:43:34.758231     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:34 no-preload-225696 kubelet[768]: E1016 19:43:34.758401     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:36 no-preload-225696 kubelet[768]: I1016 19:43:36.007790     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:36 no-preload-225696 kubelet[768]: E1016 19:43:36.008576     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:46 no-preload-225696 kubelet[768]: I1016 19:43:46.791857     768 scope.go:117] "RemoveContainer" containerID="41d9ccf1929d9d999832642ca90ea604512d03a91d987faa66ae896de2f7d34f"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: I1016 19:43:50.563386     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: I1016 19:43:50.804987     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: I1016 19:43:50.805553     768 scope.go:117] "RemoveContainer" containerID="1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: E1016 19:43:50.805713     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:56 no-preload-225696 kubelet[768]: I1016 19:43:56.008227     768 scope.go:117] "RemoveContainer" containerID="1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0"
	Oct 16 19:43:56 no-preload-225696 kubelet[768]: E1016 19:43:56.009025     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:44:05 no-preload-225696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:44:05 no-preload-225696 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:44:05 no-preload-225696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [825fe7e210b26805cdb54da81644fbf342aa5e2833a84251a10b17d560a4d1fd] <==
	2025/10/16 19:43:25 Starting overwatch
	2025/10/16 19:43:25 Using namespace: kubernetes-dashboard
	2025/10/16 19:43:25 Using in-cluster config to connect to apiserver
	2025/10/16 19:43:25 Using secret token for csrf signing
	2025/10/16 19:43:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:43:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:43:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 19:43:25 Generating JWE encryption key
	2025/10/16 19:43:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:43:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:43:26 Initializing JWE encryption key from synchronized object
	2025/10/16 19:43:26 Creating in-cluster Sidecar client
	2025/10/16 19:43:26 Serving insecurely on HTTP port: 9090
	2025/10/16 19:43:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:43:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [41d9ccf1929d9d999832642ca90ea604512d03a91d987faa66ae896de2f7d34f] <==
	I1016 19:43:16.269396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:43:46.281797       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [61e433028e9f5d9205876d98bffe8dc107dca16c19f9fc0816fd23296b3d01cd] <==
	I1016 19:43:46.917600       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:43:46.934517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:43:46.934575       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:43:46.950281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:50.409033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:54.669751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:58.267562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:01.321494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:04.343184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:04.350957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:04.351113       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:44:04.351385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-225696_2f5664f1-5f42-4044-82e1-c351401a5215!
	I1016 19:44:04.352315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62f60c8b-5f75-4039-9f5b-c9731950c343", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-225696_2f5664f1-5f42-4044-82e1-c351401a5215 became leader
	W1016 19:44:04.361937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:04.365114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:04.451525       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-225696_2f5664f1-5f42-4044-82e1-c351401a5215!
	W1016 19:44:06.368368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:06.374436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:08.377939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:08.382715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-225696 -n no-preload-225696
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-225696 -n no-preload-225696: exit status 2 (392.494423ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-225696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-225696
helpers_test.go:243: (dbg) docker inspect no-preload-225696:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a",
	        "Created": "2025-10-16T19:41:24.445990771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481663,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:43:02.608335315Z",
	            "FinishedAt": "2025-10-16T19:43:01.716326434Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/hosts",
	        "LogPath": "/var/lib/docker/containers/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a-json.log",
	        "Name": "/no-preload-225696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-225696:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-225696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a",
	                "LowerDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07a6d3c2127f7badb81b1849c80b08dc8506200efbd30f222dfd4c5a220091b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-225696",
	                "Source": "/var/lib/docker/volumes/no-preload-225696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-225696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-225696",
	                "name.minikube.sigs.k8s.io": "no-preload-225696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14cc910467f046f6a0ecabc7f40eb7360c7d6ed0b7a5d7970c1f96646ec908e7",
	            "SandboxKey": "/var/run/docker/netns/14cc910467f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-225696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:05:c3:30:72:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39b67ad0eeb0bd39715bf4033d345a54f5da2b5672e2db285dbc6c4fed23f45e",
	                    "EndpointID": "ce69aef8604ca352d71031cf8a96d77a6a70307a1027b3d5c93f9467e9b57759",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-225696",
	                        "67fd0d064b81"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696: exit status 2 (351.679184ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-225696 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-225696 logs -n 25: (1.385988622s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-853056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-853056    │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ delete  │ -p cert-options-853056                                                                                                                                                                                                                        │ cert-options-853056    │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:38 UTC │ 16 Oct 25 19:39 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-663330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │                     │
	│ stop    │ -p old-k8s-version-663330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:39 UTC │ 16 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p cert-expiration-828182                                                                                                                                                                                                                     │ cert-expiration-828182 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330 │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669     │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696      │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:43:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:43:21.948837  484119 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:43:21.948936  484119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:43:21.948941  484119 out.go:374] Setting ErrFile to fd 2...
	I1016 19:43:21.948946  484119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:43:21.949268  484119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:43:21.949681  484119 out.go:368] Setting JSON to false
	I1016 19:43:21.951749  484119 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8731,"bootTime":1760635071,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:43:21.951824  484119 start.go:141] virtualization:  
	I1016 19:43:21.954613  484119 out.go:179] * [embed-certs-751669] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:43:21.958657  484119 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:43:21.958784  484119 notify.go:220] Checking for updates...
	I1016 19:43:21.965654  484119 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:43:21.968620  484119 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:43:21.971548  484119 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:43:21.974453  484119 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:43:21.977605  484119 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:43:21.980972  484119 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:43:21.981597  484119 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:43:22.022084  484119 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:43:22.022221  484119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:43:22.143466  484119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:43:22.130430664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:43:22.143571  484119 docker.go:318] overlay module found
	I1016 19:43:22.147312  484119 out.go:179] * Using the docker driver based on existing profile
	I1016 19:43:22.150363  484119 start.go:305] selected driver: docker
	I1016 19:43:22.150384  484119 start.go:925] validating driver "docker" against &{Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:22.150493  484119 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:43:22.151220  484119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:43:22.233488  484119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:43:22.224234315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:43:22.233848  484119 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:43:22.233886  484119 cni.go:84] Creating CNI manager for ""
	I1016 19:43:22.233952  484119 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:43:22.234000  484119 start.go:349] cluster config:
	{Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:22.237395  484119 out.go:179] * Starting "embed-certs-751669" primary control-plane node in "embed-certs-751669" cluster
	I1016 19:43:22.241180  484119 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:43:22.244200  484119 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:43:22.247199  484119 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:43:22.247276  484119 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:43:22.247287  484119 cache.go:58] Caching tarball of preloaded images
	I1016 19:43:22.247383  484119 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:43:22.247393  484119 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:43:22.247513  484119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/config.json ...
	I1016 19:43:22.247756  484119 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:43:22.268212  484119 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:43:22.268230  484119 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:43:22.268243  484119 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:43:22.268265  484119 start.go:360] acquireMachinesLock for embed-certs-751669: {Name:mkb92787bce004fe7aa2e02dbed85cdecf06ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:43:22.268318  484119 start.go:364] duration metric: took 35.972µs to acquireMachinesLock for "embed-certs-751669"
	I1016 19:43:22.268337  484119 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:43:22.268342  484119 fix.go:54] fixHost starting: 
	I1016 19:43:22.268601  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:22.299548  484119 fix.go:112] recreateIfNeeded on embed-certs-751669: state=Stopped err=<nil>
	W1016 19:43:22.299629  484119 fix.go:138] unexpected machine state, will restart: <nil>
	W1016 19:43:18.984831  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:21.486847  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:22.302957  484119 out.go:252] * Restarting existing docker container for "embed-certs-751669" ...
	I1016 19:43:22.303121  484119 cli_runner.go:164] Run: docker start embed-certs-751669
	I1016 19:43:22.634711  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:22.663278  484119 kic.go:430] container "embed-certs-751669" state is running.
	I1016 19:43:22.663673  484119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:43:22.702623  484119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/config.json ...
	I1016 19:43:22.702837  484119 machine.go:93] provisionDockerMachine start ...
	I1016 19:43:22.702893  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:22.738396  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:22.739164  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:22.739185  484119 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:43:22.739861  484119 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:43:25.908933  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-751669
	
	I1016 19:43:25.908994  484119 ubuntu.go:182] provisioning hostname "embed-certs-751669"
	I1016 19:43:25.909077  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:25.936459  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:25.936766  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:25.936784  484119 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-751669 && echo "embed-certs-751669" | sudo tee /etc/hostname
	I1016 19:43:26.135873  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-751669
	
	I1016 19:43:26.135996  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:26.178885  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:26.179216  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:26.179236  484119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-751669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-751669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-751669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:43:26.357188  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:43:26.357274  484119 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:43:26.357323  484119 ubuntu.go:190] setting up certificates
	I1016 19:43:26.357379  484119 provision.go:84] configureAuth start
	I1016 19:43:26.357464  484119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:43:26.377614  484119 provision.go:143] copyHostCerts
	I1016 19:43:26.377676  484119 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:43:26.377693  484119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:43:26.377762  484119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:43:26.377858  484119 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:43:26.377863  484119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:43:26.377888  484119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:43:26.377948  484119 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:43:26.377953  484119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:43:26.377975  484119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:43:26.378026  484119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.embed-certs-751669 san=[127.0.0.1 192.168.85.2 embed-certs-751669 localhost minikube]
	I1016 19:43:26.763290  484119 provision.go:177] copyRemoteCerts
	I1016 19:43:26.763402  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:43:26.763474  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:26.782648  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:26.892411  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:43:26.925051  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	W1016 19:43:23.984843  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:26.491329  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:26.958416  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 19:43:26.988815  484119 provision.go:87] duration metric: took 631.393907ms to configureAuth
	I1016 19:43:26.988839  484119 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:43:26.989019  484119 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:43:26.989115  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.014604  484119 main.go:141] libmachine: Using SSH client type: native
	I1016 19:43:27.014911  484119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1016 19:43:27.014925  484119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:43:27.483967  484119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:43:27.484034  484119 machine.go:96] duration metric: took 4.781187278s to provisionDockerMachine
	I1016 19:43:27.484059  484119 start.go:293] postStartSetup for "embed-certs-751669" (driver="docker")
	I1016 19:43:27.484087  484119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:43:27.484182  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:43:27.484249  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.515552  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:27.639236  484119 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:43:27.643379  484119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:43:27.643406  484119 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:43:27.643418  484119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:43:27.643472  484119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:43:27.643550  484119 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:43:27.643651  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:43:27.657041  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:43:27.692832  484119 start.go:296] duration metric: took 208.740895ms for postStartSetup
	I1016 19:43:27.692951  484119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:43:27.693031  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.724532  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:27.834593  484119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:43:27.839900  484119 fix.go:56] duration metric: took 5.571550145s for fixHost
	I1016 19:43:27.839922  484119 start.go:83] releasing machines lock for "embed-certs-751669", held for 5.571595495s
	I1016 19:43:27.840010  484119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-751669
	I1016 19:43:27.861385  484119 ssh_runner.go:195] Run: cat /version.json
	I1016 19:43:27.861443  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.861742  484119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:43:27.861793  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:27.908387  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:27.913577  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:28.118781  484119 ssh_runner.go:195] Run: systemctl --version
	I1016 19:43:28.127661  484119 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:43:28.225524  484119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:43:28.236718  484119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:43:28.236802  484119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:43:28.250387  484119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:43:28.250415  484119 start.go:495] detecting cgroup driver to use...
	I1016 19:43:28.250450  484119 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:43:28.250512  484119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:43:28.277210  484119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:43:28.299739  484119 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:43:28.299810  484119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:43:28.323166  484119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:43:28.345579  484119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:43:28.546300  484119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:43:28.709183  484119 docker.go:234] disabling docker service ...
	I1016 19:43:28.709285  484119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:43:28.725319  484119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:43:28.742720  484119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:43:28.904465  484119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:43:29.081982  484119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:43:29.100132  484119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:43:29.118517  484119 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:43:29.118631  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.134467  484119 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:43:29.134564  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.143508  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.152633  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.164229  484119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:43:29.174601  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.185294  484119 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.199713  484119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:43:29.211999  484119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:43:29.224909  484119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:43:29.233377  484119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:43:29.388952  484119 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:43:29.717014  484119 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:43:29.717172  484119 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:43:29.721781  484119 start.go:563] Will wait 60s for crictl version
	I1016 19:43:29.721900  484119 ssh_runner.go:195] Run: which crictl
	I1016 19:43:29.725833  484119 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:43:29.770706  484119 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:43:29.770865  484119 ssh_runner.go:195] Run: crio --version
	I1016 19:43:29.810901  484119 ssh_runner.go:195] Run: crio --version
	I1016 19:43:29.855938  484119 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:43:29.859281  484119 cli_runner.go:164] Run: docker network inspect embed-certs-751669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:43:29.880856  484119 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:43:29.885451  484119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:43:29.898805  484119 kubeadm.go:883] updating cluster {Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:43:29.898923  484119 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:43:29.898981  484119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:43:29.945632  484119 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:43:29.945653  484119 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:43:29.945711  484119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:43:29.995510  484119 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:43:29.995584  484119 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:43:29.995606  484119 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:43:29.995750  484119 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-751669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:43:29.995865  484119 ssh_runner.go:195] Run: crio config
	I1016 19:43:30.107419  484119 cni.go:84] Creating CNI manager for ""
	I1016 19:43:30.107443  484119 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:43:30.107504  484119 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:43:30.107589  484119 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-751669 NodeName:embed-certs-751669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:43:30.107775  484119 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-751669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:43:30.107878  484119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:43:30.119319  484119 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:43:30.119397  484119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:43:30.131865  484119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1016 19:43:30.156571  484119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:43:30.176549  484119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
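	The kubeadm config printed above is rendered by minikube and copied to the node as /var/tmp/minikube/kubeadm.yaml.new. For reference, a minimal Go sketch of rendering such an InitConfiguration fragment with text/template; the struct and field names here are illustrative assumptions, not minikube's actual types, and the values are copied from the log.

package main

import (
	"os"
	"text/template"
)

// params holds only the values that vary in the fragment below (illustrative).
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	NodeIP           string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initConfigTmpl))
	// Values taken from the log above.
	p := params{
		AdvertiseAddress: "192.168.85.2",
		BindPort:         8443,
		NodeName:         "embed-certs-751669",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeIP:           "192.168.85.2",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}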
	I1016 19:43:30.194149  484119 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:43:30.199230  484119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:43:30.210357  484119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:43:30.371318  484119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:43:30.390496  484119 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669 for IP: 192.168.85.2
	I1016 19:43:30.390563  484119 certs.go:195] generating shared ca certs ...
	I1016 19:43:30.390593  484119 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:30.390791  484119 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:43:30.390883  484119 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:43:30.390906  484119 certs.go:257] generating profile certs ...
	I1016 19:43:30.391035  484119 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/client.key
	I1016 19:43:30.391137  484119 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key.98c460c4
	I1016 19:43:30.391226  484119 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key
	I1016 19:43:30.391433  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:43:30.391511  484119 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:43:30.391536  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:43:30.391594  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:43:30.391636  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:43:30.391695  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:43:30.391770  484119 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:43:30.392673  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:43:30.425130  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:43:30.499911  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:43:30.584723  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:43:30.671903  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1016 19:43:30.733914  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 19:43:30.790730  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:43:30.814637  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/embed-certs-751669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:43:30.840412  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:43:30.868409  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:43:30.892479  484119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:43:30.921100  484119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:43:30.942883  484119 ssh_runner.go:195] Run: openssl version
	I1016 19:43:30.952475  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:43:30.964638  484119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:43:30.968748  484119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:43:30.968880  484119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:43:31.025265  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:43:31.036191  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:43:31.046843  484119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:43:31.051475  484119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:43:31.051574  484119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:43:31.113280  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:43:31.137527  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:43:31.153815  484119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:43:31.159708  484119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:43:31.159840  484119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:43:31.242568  484119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:43:31.253722  484119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:43:31.258751  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:43:31.340961  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:43:31.432585  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:43:31.561794  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:43:31.648725  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:43:31.707918  484119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
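	The openssl x509 -checkend 86400 runs above verify that each control-plane certificate stays valid for at least 24 hours before the cluster is restarted. A minimal equivalent check in Go's crypto/x509, as a sketch rather than minikube's implementation; the certificate path is one of those from the log and the check would have to run on the node (or over SSH, as minikube does).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Mirrors `openssl x509 -checkend 86400`: does now+d pass NotAfter?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}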
	I1016 19:43:31.809654  484119 kubeadm.go:400] StartCluster: {Name:embed-certs-751669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-751669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:43:31.809807  484119 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:43:31.809907  484119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:43:31.866627  484119 cri.go:89] found id: "8a2a4e8f60de83dc93958769d40834e8c6e8098a4d24326639566a8eb761d219"
	I1016 19:43:31.866700  484119 cri.go:89] found id: "cdb6c8787e86665ba81ed5e2b63948fa8bd322ac9fe2eeaabc3de67e2ae1762a"
	I1016 19:43:31.866719  484119 cri.go:89] found id: "2368c8473fac0e17d1c889c89f8bd36e68e1075d0382ddf4f2ad6c01dcf5819f"
	I1016 19:43:31.866750  484119 cri.go:89] found id: "01a051b12eaa75566bd0ed32bda2684f339c52afc7b5e80f79acc29785a0fe59"
	I1016 19:43:31.866779  484119 cri.go:89] found id: ""
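	The IDs above come from crictl filtered to the kube-system namespace label. A small sketch of the same listing; the log runs it over SSH via sudo -s eval, while this version assumes crictl and sudo are available locally.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl command shown in the log and
// returns the container IDs it prints, one per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}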
	I1016 19:43:31.866854  484119 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:43:31.887219  484119 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:43:31Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:43:31.887342  484119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:43:31.913062  484119 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:43:31.913146  484119 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:43:31.913227  484119 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:43:31.930933  484119 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:43:31.931630  484119 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-751669" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:43:31.931948  484119 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-751669" cluster setting kubeconfig missing "embed-certs-751669" context setting]
	I1016 19:43:31.932451  484119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:31.934318  484119 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:43:31.947399  484119 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1016 19:43:31.947481  484119 kubeadm.go:601] duration metric: took 34.315434ms to restartPrimaryControlPlane
	I1016 19:43:31.947505  484119 kubeadm.go:402] duration metric: took 137.868707ms to StartCluster
	I1016 19:43:31.947542  484119 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:31.947635  484119 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:43:31.949878  484119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:43:31.950419  484119 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:43:31.950526  484119 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:43:31.950602  484119 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-751669"
	I1016 19:43:31.950623  484119 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-751669"
	W1016 19:43:31.950629  484119 addons.go:247] addon storage-provisioner should already be in state true
	I1016 19:43:31.950650  484119 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:43:31.951190  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:31.951357  484119 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:43:31.951747  484119 addons.go:69] Setting dashboard=true in profile "embed-certs-751669"
	I1016 19:43:31.951801  484119 addons.go:238] Setting addon dashboard=true in "embed-certs-751669"
	W1016 19:43:31.951822  484119 addons.go:247] addon dashboard should already be in state true
	I1016 19:43:31.951897  484119 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:43:31.952508  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:31.952723  484119 addons.go:69] Setting default-storageclass=true in profile "embed-certs-751669"
	I1016 19:43:31.952772  484119 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-751669"
	I1016 19:43:31.953070  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:31.957445  484119 out.go:179] * Verifying Kubernetes components...
	I1016 19:43:31.963507  484119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:43:32.004694  484119 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:43:32.007847  484119 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:43:32.007882  484119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:43:32.007971  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:32.019823  484119 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 19:43:32.022680  484119 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1016 19:43:28.987082  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:31.484120  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:32.023470  484119 addons.go:238] Setting addon default-storageclass=true in "embed-certs-751669"
	W1016 19:43:32.023490  484119 addons.go:247] addon default-storageclass should already be in state true
	I1016 19:43:32.023515  484119 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:43:32.023967  484119 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:43:32.026027  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 19:43:32.026051  484119 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 19:43:32.026115  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:32.088630  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:32.090099  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:32.104490  484119 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:43:32.104516  484119 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:43:32.104585  484119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:43:32.129391  484119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:43:32.344352  484119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:43:32.344848  484119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:43:32.352130  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 19:43:32.352152  484119 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 19:43:32.403361  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 19:43:32.403393  484119 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 19:43:32.407893  484119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:43:32.430310  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 19:43:32.430335  484119 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 19:43:32.458785  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 19:43:32.458872  484119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 19:43:32.507074  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 19:43:32.507160  484119 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 19:43:32.575727  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 19:43:32.575811  484119 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 19:43:32.594560  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 19:43:32.594632  484119 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 19:43:32.609515  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 19:43:32.609586  484119 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 19:43:32.627669  484119 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:43:32.627742  484119 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 19:43:32.654329  484119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
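	All dashboard manifests are applied in a single kubectl invocation with an explicit KUBECONFIG, as shown above. A minimal sketch of that step; the paths are copied from the log, and running this off-node would need local copies of the manifests and kubectl.

package main

import (
	"os"
	"os/exec"
)

// applyManifests invokes kubectl apply with one -f flag per manifest file,
// pointing it at the given kubeconfig via the environment.
func applyManifests(kubectl, kubeconfig string, files []string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// remaining dashboard manifests as listed in the log
	}
	if err := applyManifests("kubectl", "/var/lib/minikube/kubeconfig", files); err != nil {
		panic(err)
	}
}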
	W1016 19:43:33.983653  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:35.985264  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:38.246686  484119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.901795355s)
	I1016 19:43:38.246789  484119 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.902411666s)
	I1016 19:43:38.247160  484119 node_ready.go:35] waiting up to 6m0s for node "embed-certs-751669" to be "Ready" ...
	I1016 19:43:38.344708  484119 node_ready.go:49] node "embed-certs-751669" is "Ready"
	I1016 19:43:38.344737  484119 node_ready.go:38] duration metric: took 97.560816ms for node "embed-certs-751669" to be "Ready" ...
	I1016 19:43:38.344751  484119 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:43:38.344809  484119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:43:39.494887  484119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.086960972s)
	I1016 19:43:39.495017  484119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.840606973s)
	I1016 19:43:39.495207  484119 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.150381429s)
	I1016 19:43:39.495224  484119 api_server.go:72] duration metric: took 7.543826369s to wait for apiserver process to appear ...
	I1016 19:43:39.495230  484119 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:43:39.495247  484119 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:43:39.498524  484119 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-751669 addons enable metrics-server
	
	I1016 19:43:39.501534  484119 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1016 19:43:39.504061  484119 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 19:43:39.504135  484119 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 19:43:39.505352  484119 addons.go:514] duration metric: took 7.554826197s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1016 19:43:39.995395  484119 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:43:40.012531  484119 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
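	The healthz wait above first sees a 500 while poststarthook/rbac/bootstrap-roles finishes, then a 200 on the next poll. A sketch of polling the endpoint the same way; skipping TLS verification and relying on anonymous access are simplifications here, since minikube itself uses the cluster CA and client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}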
	I1016 19:43:40.016514  484119 api_server.go:141] control plane version: v1.34.1
	I1016 19:43:40.016602  484119 api_server.go:131] duration metric: took 521.363778ms to wait for apiserver health ...
	I1016 19:43:40.016629  484119 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:43:40.024388  484119 system_pods.go:59] 8 kube-system pods found
	I1016 19:43:40.024484  484119 system_pods.go:61] "coredns-66bc5c9577-2h6z6" [af34943c-9e1b-4fae-a8b8-815874618d70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:43:40.024519  484119 system_pods.go:61] "etcd-embed-certs-751669" [37b3cc63-0b45-4c80-ae4b-a06c4869d837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:43:40.024544  484119 system_pods.go:61] "kindnet-cjx87" [95baa320-d051-4ea0-907e-d603971eb05a] Running
	I1016 19:43:40.024570  484119 system_pods.go:61] "kube-apiserver-embed-certs-751669" [d831a0cf-77fe-4e1c-b8b3-ee99ad90700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:43:40.024604  484119 system_pods.go:61] "kube-controller-manager-embed-certs-751669" [833e0630-a2ac-4861-8b1c-ed28a314d799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:43:40.024629  484119 system_pods.go:61] "kube-proxy-lvmlh" [6b56d13a-ca45-4f0d-92df-db96025be2e4] Running
	I1016 19:43:40.024655  484119 system_pods.go:61] "kube-scheduler-embed-certs-751669" [eeed62cd-46a6-4ec3-8e39-4f27a264982e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:43:40.024689  484119 system_pods.go:61] "storage-provisioner" [139c88ca-0616-415b-91d4-03e93ae02f70] Running
	I1016 19:43:40.024715  484119 system_pods.go:74] duration metric: took 8.064856ms to wait for pod list to return data ...
	I1016 19:43:40.024739  484119 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:43:40.028445  484119 default_sa.go:45] found service account: "default"
	I1016 19:43:40.028525  484119 default_sa.go:55] duration metric: took 3.760089ms for default service account to be created ...
	I1016 19:43:40.028553  484119 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:43:40.032930  484119 system_pods.go:86] 8 kube-system pods found
	I1016 19:43:40.033011  484119 system_pods.go:89] "coredns-66bc5c9577-2h6z6" [af34943c-9e1b-4fae-a8b8-815874618d70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:43:40.033036  484119 system_pods.go:89] "etcd-embed-certs-751669" [37b3cc63-0b45-4c80-ae4b-a06c4869d837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:43:40.033059  484119 system_pods.go:89] "kindnet-cjx87" [95baa320-d051-4ea0-907e-d603971eb05a] Running
	I1016 19:43:40.033081  484119 system_pods.go:89] "kube-apiserver-embed-certs-751669" [d831a0cf-77fe-4e1c-b8b3-ee99ad90700b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:43:40.033103  484119 system_pods.go:89] "kube-controller-manager-embed-certs-751669" [833e0630-a2ac-4861-8b1c-ed28a314d799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:43:40.033125  484119 system_pods.go:89] "kube-proxy-lvmlh" [6b56d13a-ca45-4f0d-92df-db96025be2e4] Running
	I1016 19:43:40.033167  484119 system_pods.go:89] "kube-scheduler-embed-certs-751669" [eeed62cd-46a6-4ec3-8e39-4f27a264982e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:43:40.033188  484119 system_pods.go:89] "storage-provisioner" [139c88ca-0616-415b-91d4-03e93ae02f70] Running
	I1016 19:43:40.033212  484119 system_pods.go:126] duration metric: took 4.641099ms to wait for k8s-apps to be running ...
	I1016 19:43:40.033235  484119 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:43:40.033329  484119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:43:40.053917  484119 system_svc.go:56] duration metric: took 20.662994ms WaitForService to wait for kubelet
	I1016 19:43:40.053998  484119 kubeadm.go:586] duration metric: took 8.102588062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:43:40.054033  484119 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:43:40.059362  484119 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:43:40.059449  484119 node_conditions.go:123] node cpu capacity is 2
	I1016 19:43:40.059479  484119 node_conditions.go:105] duration metric: took 5.414432ms to run NodePressure ...
	I1016 19:43:40.059505  484119 start.go:241] waiting for startup goroutines ...
	I1016 19:43:40.059545  484119 start.go:246] waiting for cluster config update ...
	I1016 19:43:40.059570  484119 start.go:255] writing updated cluster config ...
	I1016 19:43:40.059953  484119 ssh_runner.go:195] Run: rm -f paused
	I1016 19:43:40.064592  484119 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:43:40.069615  484119 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2h6z6" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:43:38.485739  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:40.983982  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:42.084369  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:44.575907  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:46.577259  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:42.984333  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:45.484907  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:48.577842  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:50.583583  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:47.985123  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	W1016 19:43:50.490408  481534 pod_ready.go:104] pod "coredns-66bc5c9577-jr55z" is not "Ready", error: <nil>
	I1016 19:43:51.983704  481534 pod_ready.go:94] pod "coredns-66bc5c9577-jr55z" is "Ready"
	I1016 19:43:51.983731  481534 pod_ready.go:86] duration metric: took 35.005287371s for pod "coredns-66bc5c9577-jr55z" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.986571  481534 pod_ready.go:83] waiting for pod "etcd-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.991448  481534 pod_ready.go:94] pod "etcd-no-preload-225696" is "Ready"
	I1016 19:43:51.991479  481534 pod_ready.go:86] duration metric: took 4.882726ms for pod "etcd-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.993927  481534 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:51.999836  481534 pod_ready.go:94] pod "kube-apiserver-no-preload-225696" is "Ready"
	I1016 19:43:51.999867  481534 pod_ready.go:86] duration metric: took 5.912079ms for pod "kube-apiserver-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.014951  481534 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.181561  481534 pod_ready.go:94] pod "kube-controller-manager-no-preload-225696" is "Ready"
	I1016 19:43:52.181590  481534 pod_ready.go:86] duration metric: took 166.609633ms for pod "kube-controller-manager-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.381611  481534 pod_ready.go:83] waiting for pod "kube-proxy-m86rv" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.781846  481534 pod_ready.go:94] pod "kube-proxy-m86rv" is "Ready"
	I1016 19:43:52.781873  481534 pod_ready.go:86] duration metric: took 400.235214ms for pod "kube-proxy-m86rv" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:52.983109  481534 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:53.382239  481534 pod_ready.go:94] pod "kube-scheduler-no-preload-225696" is "Ready"
	I1016 19:43:53.382268  481534 pod_ready.go:86] duration metric: took 399.132362ms for pod "kube-scheduler-no-preload-225696" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:43:53.382280  481534 pod_ready.go:40] duration metric: took 36.408094274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:43:53.436731  481534 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:43:53.440081  481534 out.go:179] * Done! kubectl is now configured to use "no-preload-225696" cluster and "default" namespace by default
	W1016 19:43:53.075759  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:55.575355  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:43:58.075280  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:44:00.100477  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:44:02.576476  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
	W1016 19:44:05.079107  484119 pod_ready.go:104] pod "coredns-66bc5c9577-2h6z6" is not "Ready", error: <nil>
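	The pod_ready waits above poll each kube-system pod until its Ready condition is True, within a 4-minute budget. A sketch of the same check using client-go; the kubeconfig path is an assumed placeholder, and minikube's own implementation differs in detail.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-2h6z6", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}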
	
	
	==> CRI-O <==
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.564288295Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc9d3209-d506-460e-9cf8-4f5bb66fd9eb name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.565858502Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9292701e-67d3-4161-aaf2-47000b76ec40 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.567153581Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper" id=90f5f41f-28c7-4248-9230-028f3c37c92f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.567368385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.584105858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.584676252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.608458067Z" level=info msg="Created container 1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper" id=90f5f41f-28c7-4248-9230-028f3c37c92f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.609690104Z" level=info msg="Starting container: 1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0" id=31845089-ccc7-49c5-8cc4-0dfcbf495d9a name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.613575151Z" level=info msg="Started container" PID=1635 containerID=1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper id=31845089-ccc7-49c5-8cc4-0dfcbf495d9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a
	Oct 16 19:43:50 no-preload-225696 conmon[1633]: conmon 1f991b7f7f42165c9ce2 <ninfo>: container 1635 exited with status 1
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.817752317Z" level=info msg="Removing container: f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced" id=538894d2-f421-49d2-925c-7dabcf8b0010 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.827858746Z" level=info msg="Error loading conmon cgroup of container f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced: cgroup deleted" id=538894d2-f421-49d2-925c-7dabcf8b0010 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:43:50 no-preload-225696 crio[651]: time="2025-10-16T19:43:50.838280337Z" level=info msg="Removed container f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn/dashboard-metrics-scraper" id=538894d2-f421-49d2-925c-7dabcf8b0010 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.608240132Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.615443425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.61547938Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.61550178Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.618703348Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.618736727Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.618761171Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.622184149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.622335362Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.622571623Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.626859612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:43:56 no-preload-225696 crio[651]: time="2025-10-16T19:43:56.626892309Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1f991b7f7f421       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   3b163066cd8c9       dashboard-metrics-scraper-6ffb444bf9-4xtkn   kubernetes-dashboard
	61e433028e9f5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago      Running             storage-provisioner         2                   f1f2fea3bb0cc       storage-provisioner                          kube-system
	825fe7e210b26       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago      Running             kubernetes-dashboard        0                   7e473745e6c21       kubernetes-dashboard-855c9754f9-d6pcj        kubernetes-dashboard
	36e68cde0cbee       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago      Running             coredns                     1                   5d993905d1f0d       coredns-66bc5c9577-jr55z                     kube-system
	3692cc5de998b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago      Running             kindnet-cni                 1                   bd2f1c31274b1       kindnet-kfg52                                kube-system
	574d84457d8bf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago      Running             busybox                     1                   85dacd4bbaa25       busybox                                      default
	e3b885bb4fb97       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago      Running             kube-proxy                  1                   92064e38d1c3e       kube-proxy-m86rv                             kube-system
	41d9ccf1929d9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago      Exited              storage-provisioner         1                   f1f2fea3bb0cc       storage-provisioner                          kube-system
	7300b15e4085a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   e6c40dc4677fc       kube-apiserver-no-preload-225696             kube-system
	54c3315a98e54       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   2ae11930d2c32       kube-controller-manager-no-preload-225696    kube-system
	3ba8ff04c879c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   0db94b543a1da       kube-scheduler-no-preload-225696             kube-system
	948a539396c16       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   f9950375cf903       etcd-no-preload-225696                       kube-system
	
	
	==> coredns [36e68cde0cbee113287402c3971f42685f0998bc56e6d7f67c52fb9aeb37e79f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46376 - 20167 "HINFO IN 5168467559581767183.966991120985240760. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007628714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
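	The reflector errors above are plain TCP timeouts from CoreDNS to the kubernetes service VIP at 10.96.0.1:443. A minimal connectivity probe of that address, as a diagnostic sketch run from inside the cluster network rather than anything the test itself does:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address and port taken from the CoreDNS errors above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("cannot reach kubernetes service VIP:", err)
		return
	}
	conn.Close()
	fmt.Println("kubernetes service VIP reachable")
}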
	
	
	==> describe nodes <==
	Name:               no-preload-225696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-225696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=no-preload-225696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_42_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:42:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-225696
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:43:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:43:46 +0000   Thu, 16 Oct 2025 19:42:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-225696
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                7c11c781-d716-4555-8158-86dd5d9b993e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-jr55z                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-225696                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-kfg52                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-225696              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-225696     200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-m86rv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-225696              100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4xtkn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d6pcj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 53s                  kube-proxy       
	  Normal   Starting                 2m4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m4s (x4 over 2m4s)  kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m4s (x4 over 2m4s)  kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s (x4 over 2m4s)  kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 115s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           111s                 node-controller  Node no-preload-225696 event: Registered Node no-preload-225696 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-225696 status is now: NodeReady
	  Normal   Starting                 60s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node no-preload-225696 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node no-preload-225696 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node no-preload-225696 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                  node-controller  Node no-preload-225696 event: Registered Node no-preload-225696 in Controller
	
	
	==> dmesg <==
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [948a539396c168da2900996f537d4295485126181c9390e8ecf95665342f725d] <==
	{"level":"warn","ts":"2025-10-16T19:43:13.262357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.303156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.303414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.332344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.352027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.372155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.404884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.411227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.423788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.452172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.472110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.504002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.527818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.549211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.567383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.583440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.605525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.621232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.643893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.659425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.678665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.711306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.731126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.792290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:13.889417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44764","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:44:10 up  2:26,  0 user,  load average: 3.45, 3.58, 2.99
	Linux no-preload-225696 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3692cc5de998b90aae84a96921c2274a4037e62497227812a010c277bf893a25] <==
	I1016 19:43:16.327200       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:43:16.409899       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:43:16.410165       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:43:16.410219       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:43:16.410259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:43:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:43:16.607063       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:43:16.607166       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:43:16.607202       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:43:16.607699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:43:46.608100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 19:43:46.608243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:43:46.608337       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:43:46.609590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1016 19:43:48.107455       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:43:48.107559       1 metrics.go:72] Registering metrics
	I1016 19:43:48.107643       1 controller.go:711] "Syncing nftables rules"
	I1016 19:43:56.607315       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:43:56.607975       1 main.go:301] handling current node
	I1016 19:44:06.615094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:44:06.615128       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7300b15e4085a66cb68787117e92bb710eb0d1215ec993db5fb84c3d949130d8] <==
	I1016 19:43:15.348587       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 19:43:15.355624       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:43:15.355656       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:43:15.380954       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:43:15.383219       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 19:43:15.383252       1 policy_source.go:240] refreshing policies
	I1016 19:43:15.383312       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:43:15.387316       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 19:43:15.387371       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:43:15.397239       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:43:15.403731       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:43:15.406993       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:43:15.416374       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 19:43:15.418217       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:43:15.631977       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:43:15.805487       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:43:15.975469       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:43:16.131313       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:43:16.257805       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:43:16.291004       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:43:16.450835       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.107.60"}
	I1016 19:43:16.475442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.236.223"}
	I1016 19:43:18.799522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:43:19.200750       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:43:19.248626       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [54c3315a98e54e9dea40491fb54e4522a7a4b2f2741c1db37a3baf94aa4ca7fe] <==
	I1016 19:43:18.793553       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 19:43:18.793876       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:43:18.796733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:43:18.797067       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 19:43:18.800382       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 19:43:18.801638       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:43:18.817618       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:43:18.823943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:18.828123       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:43:18.829255       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 19:43:18.831486       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:43:18.834851       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:18.834880       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:43:18.834888       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:43:18.842028       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 19:43:18.842220       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 19:43:18.842250       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:43:18.842633       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 19:43:18.846056       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 19:43:18.846096       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:43:18.846159       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:43:18.846193       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:43:18.846225       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-225696"
	I1016 19:43:18.846273       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 19:43:18.856038       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [e3b885bb4fb971bce2efdf7f5ef86bd41c06a2df486460d3723e0cafcf13050c] <==
	I1016 19:43:16.761922       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:43:16.931015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:43:17.031686       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:43:17.031802       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:43:17.031901       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:43:17.062859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:43:17.062979       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:43:17.067110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:43:17.067457       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:43:17.067647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:17.069007       1 config.go:200] "Starting service config controller"
	I1016 19:43:17.069072       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:43:17.069116       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:43:17.069341       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:43:17.069416       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:43:17.069445       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:43:17.070122       1 config.go:309] "Starting node config controller"
	I1016 19:43:17.073057       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:43:17.073167       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:43:17.169661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:43:17.169703       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:43:17.169750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ba8ff04c879c0b8622800d55c14e4e53ce7edc4fc8527ba00de12d8cf1436a8] <==
	I1016 19:43:14.123229       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:43:17.205126       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:43:17.205264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:17.216619       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:43:17.216753       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:17.218572       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:17.216732       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:43:17.218661       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:43:17.216764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:17.219007       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:17.216777       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:43:17.318716       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:17.319493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:17.319560       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: I1016 19:43:19.518273     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4xtkn\" (UID: \"1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn"
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: I1016 19:43:19.518329     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbld5\" (UniqueName: \"kubernetes.io/projected/1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33-kube-api-access-nbld5\") pod \"dashboard-metrics-scraper-6ffb444bf9-4xtkn\" (UID: \"1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn"
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: W1016 19:43:19.724124     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/crio-7e473745e6c21ce854a519f0507e84760bd7b48e03749910e29176ede968641a WatchSource:0}: Error finding container 7e473745e6c21ce854a519f0507e84760bd7b48e03749910e29176ede968641a: Status 404 returned error can't find the container with id 7e473745e6c21ce854a519f0507e84760bd7b48e03749910e29176ede968641a
	Oct 16 19:43:19 no-preload-225696 kubelet[768]: W1016 19:43:19.740628     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/67fd0d064b81041655343dc9b0e5128ee79c40e7b8c4ce9723aab7c23b0dea5a/crio-3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a WatchSource:0}: Error finding container 3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a: Status 404 returned error can't find the container with id 3b163066cd8c92c93bca83ce76d361b25618eecf31f643b56c7f368294a7088a
	Oct 16 19:43:21 no-preload-225696 kubelet[768]: I1016 19:43:21.675550     768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 19:43:30 no-preload-225696 kubelet[768]: I1016 19:43:30.110050     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d6pcj" podStartSLOduration=5.260268199 podStartE2EDuration="11.109735855s" podCreationTimestamp="2025-10-16 19:43:19 +0000 UTC" firstStartedPulling="2025-10-16 19:43:19.728744232 +0000 UTC m=+9.334207639" lastFinishedPulling="2025-10-16 19:43:25.578211889 +0000 UTC m=+15.183675295" observedRunningTime="2025-10-16 19:43:25.720819607 +0000 UTC m=+15.326283022" watchObservedRunningTime="2025-10-16 19:43:30.109735855 +0000 UTC m=+19.715199270"
	Oct 16 19:43:31 no-preload-225696 kubelet[768]: I1016 19:43:31.766952     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podStartSLOduration=0.865429377 podStartE2EDuration="12.766932902s" podCreationTimestamp="2025-10-16 19:43:19 +0000 UTC" firstStartedPulling="2025-10-16 19:43:19.745876946 +0000 UTC m=+9.351340353" lastFinishedPulling="2025-10-16 19:43:31.647380471 +0000 UTC m=+21.252843878" observedRunningTime="2025-10-16 19:43:31.766535088 +0000 UTC m=+21.371998503" watchObservedRunningTime="2025-10-16 19:43:31.766932902 +0000 UTC m=+21.372396374"
	Oct 16 19:43:32 no-preload-225696 kubelet[768]: I1016 19:43:32.750023     768 scope.go:117] "RemoveContainer" containerID="4e0d282a3f82cfa941e8e464ba009fd304cdf2b8ab2058c46497336badbc3818"
	Oct 16 19:43:33 no-preload-225696 kubelet[768]: I1016 19:43:33.753982     768 scope.go:117] "RemoveContainer" containerID="4e0d282a3f82cfa941e8e464ba009fd304cdf2b8ab2058c46497336badbc3818"
	Oct 16 19:43:33 no-preload-225696 kubelet[768]: I1016 19:43:33.754281     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:33 no-preload-225696 kubelet[768]: E1016 19:43:33.754436     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:34 no-preload-225696 kubelet[768]: I1016 19:43:34.758231     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:34 no-preload-225696 kubelet[768]: E1016 19:43:34.758401     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:36 no-preload-225696 kubelet[768]: I1016 19:43:36.007790     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:36 no-preload-225696 kubelet[768]: E1016 19:43:36.008576     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:46 no-preload-225696 kubelet[768]: I1016 19:43:46.791857     768 scope.go:117] "RemoveContainer" containerID="41d9ccf1929d9d999832642ca90ea604512d03a91d987faa66ae896de2f7d34f"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: I1016 19:43:50.563386     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: I1016 19:43:50.804987     768 scope.go:117] "RemoveContainer" containerID="f140fdd3dab2e6d49e4f7c00ef0e58c5f29eba3af5d217dc402533cee1bbbced"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: I1016 19:43:50.805553     768 scope.go:117] "RemoveContainer" containerID="1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0"
	Oct 16 19:43:50 no-preload-225696 kubelet[768]: E1016 19:43:50.805713     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:43:56 no-preload-225696 kubelet[768]: I1016 19:43:56.008227     768 scope.go:117] "RemoveContainer" containerID="1f991b7f7f42165c9ce22614ac2f32519f7d9551f623c3c068b920302279e3d0"
	Oct 16 19:43:56 no-preload-225696 kubelet[768]: E1016 19:43:56.009025     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4xtkn_kubernetes-dashboard(1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4xtkn" podUID="1448fcf1-9d9f-48a8-8eda-26f1fc5b5c33"
	Oct 16 19:44:05 no-preload-225696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:44:05 no-preload-225696 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:44:05 no-preload-225696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [825fe7e210b26805cdb54da81644fbf342aa5e2833a84251a10b17d560a4d1fd] <==
	2025/10/16 19:43:25 Starting overwatch
	2025/10/16 19:43:25 Using namespace: kubernetes-dashboard
	2025/10/16 19:43:25 Using in-cluster config to connect to apiserver
	2025/10/16 19:43:25 Using secret token for csrf signing
	2025/10/16 19:43:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:43:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:43:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 19:43:25 Generating JWE encryption key
	2025/10/16 19:43:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:43:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:43:26 Initializing JWE encryption key from synchronized object
	2025/10/16 19:43:26 Creating in-cluster Sidecar client
	2025/10/16 19:43:26 Serving insecurely on HTTP port: 9090
	2025/10/16 19:43:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:43:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [41d9ccf1929d9d999832642ca90ea604512d03a91d987faa66ae896de2f7d34f] <==
	I1016 19:43:16.269396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:43:46.281797       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [61e433028e9f5d9205876d98bffe8dc107dca16c19f9fc0816fd23296b3d01cd] <==
	I1016 19:43:46.917600       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:43:46.934517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:43:46.934575       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:43:46.950281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:50.409033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:54.669751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:43:58.267562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:01.321494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:04.343184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:04.350957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:04.351113       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:44:04.351385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-225696_2f5664f1-5f42-4044-82e1-c351401a5215!
	I1016 19:44:04.352315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62f60c8b-5f75-4039-9f5b-c9731950c343", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-225696_2f5664f1-5f42-4044-82e1-c351401a5215 became leader
	W1016 19:44:04.361937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:04.365114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:04.451525       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-225696_2f5664f1-5f42-4044-82e1-c351401a5215!
	W1016 19:44:06.368368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:06.374436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:08.377939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:08.382715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:10.385408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:10.390150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
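Several of the component logs above (coredns, kindnet, and the first storage-provisioner instance) report "dial tcp 10.96.0.1:443: i/o timeout" against the in-cluster apiserver Service shortly after the restart before recovering. A minimal, hypothetical follow-up (not part of this test run) to inspect that Service and its endpoints from the host would be:

	# ClusterIP Service the components time out against, and its endpoints (EndpointSlices, per the deprecation warnings above)
	kubectl --context no-preload-225696 get svc kubernetes -o wide
	kubectl --context no-preload-225696 get endpointslices -n default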
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-225696 -n no-preload-225696
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-225696 -n no-preload-225696: exit status 2 (403.250273ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-225696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.42s)
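The post-mortem above is assembled from ordinary minikube and kubectl commands. A minimal sketch of repeating the same triage by hand against this profile (the first two commands mirror the helpers output above; the last is the log dump minikube suggests attaching to issue reports):

	# apiserver status as reported by minikube for the profile
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-225696
	# list any pods that are not Running, cluster-wide
	kubectl --context no-preload-225696 get po -A --field-selector=status.phase!=Running
	# capture the full node and component logs to a file
	out/minikube-linux-arm64 -p no-preload-225696 logs --file=logs.txt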

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-751669 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-751669 --alsologtostderr -v=1: exit status 80 (2.042809101s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-751669 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:44:24.311399  489193 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:44:24.311593  489193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:24.311626  489193 out.go:374] Setting ErrFile to fd 2...
	I1016 19:44:24.311649  489193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:24.311930  489193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:44:24.312252  489193 out.go:368] Setting JSON to false
	I1016 19:44:24.312309  489193 mustload.go:65] Loading cluster: embed-certs-751669
	I1016 19:44:24.312733  489193 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:24.313294  489193 cli_runner.go:164] Run: docker container inspect embed-certs-751669 --format={{.State.Status}}
	I1016 19:44:24.331043  489193 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:44:24.331359  489193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:24.393467  489193 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-16 19:44:24.384212045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:24.394154  489193 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-751669 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 19:44:24.397633  489193 out.go:179] * Pausing node embed-certs-751669 ... 
	I1016 19:44:24.400405  489193 host.go:66] Checking if "embed-certs-751669" exists ...
	I1016 19:44:24.400755  489193 ssh_runner.go:195] Run: systemctl --version
	I1016 19:44:24.400806  489193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-751669
	I1016 19:44:24.420618  489193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/embed-certs-751669/id_rsa Username:docker}
	I1016 19:44:24.523612  489193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:44:24.538714  489193 pause.go:52] kubelet running: true
	I1016 19:44:24.538795  489193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:44:24.812915  489193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:44:24.813016  489193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:44:24.902819  489193 cri.go:89] found id: "16a1dc880de0562e7fa670a01682311e2203be018e5a748bbe56cf1c1f6e3e51"
	I1016 19:44:24.902843  489193 cri.go:89] found id: "49a17fa869c65153404159392f64f8f9f559558f7abb64bc7d124d18bcc2597a"
	I1016 19:44:24.902848  489193 cri.go:89] found id: "eadf94f6b8838ac4c1800c834d95152476fd2be03f6ae1e62836a05a0e4e1248"
	I1016 19:44:24.902852  489193 cri.go:89] found id: "d7b2f5278dfd381e2a01706c9311c5dfd9420611b3505d3fe56c9fb7d71a711c"
	I1016 19:44:24.902855  489193 cri.go:89] found id: "4745f67d64b371c5ebd81706d5db08ae45a0bc210dfc3842b0e4dfe1592ae79a"
	I1016 19:44:24.902859  489193 cri.go:89] found id: "8a2a4e8f60de83dc93958769d40834e8c6e8098a4d24326639566a8eb761d219"
	I1016 19:44:24.902862  489193 cri.go:89] found id: "cdb6c8787e86665ba81ed5e2b63948fa8bd322ac9fe2eeaabc3de67e2ae1762a"
	I1016 19:44:24.902868  489193 cri.go:89] found id: "2368c8473fac0e17d1c889c89f8bd36e68e1075d0382ddf4f2ad6c01dcf5819f"
	I1016 19:44:24.902871  489193 cri.go:89] found id: "01a051b12eaa75566bd0ed32bda2684f339c52afc7b5e80f79acc29785a0fe59"
	I1016 19:44:24.902878  489193 cri.go:89] found id: "5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192"
	I1016 19:44:24.902881  489193 cri.go:89] found id: "e9174935e4924619bbd7b732372997943016a19eebde101e652dde4e3e693e72"
	I1016 19:44:24.902884  489193 cri.go:89] found id: ""
	I1016 19:44:24.902935  489193 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:44:24.915641  489193 retry.go:31] will retry after 357.314533ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:24Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:44:25.274094  489193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:44:25.288828  489193 pause.go:52] kubelet running: false
	I1016 19:44:25.288900  489193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:44:25.538103  489193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:44:25.538242  489193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:44:25.651569  489193 cri.go:89] found id: "16a1dc880de0562e7fa670a01682311e2203be018e5a748bbe56cf1c1f6e3e51"
	I1016 19:44:25.651596  489193 cri.go:89] found id: "49a17fa869c65153404159392f64f8f9f559558f7abb64bc7d124d18bcc2597a"
	I1016 19:44:25.651602  489193 cri.go:89] found id: "eadf94f6b8838ac4c1800c834d95152476fd2be03f6ae1e62836a05a0e4e1248"
	I1016 19:44:25.651607  489193 cri.go:89] found id: "d7b2f5278dfd381e2a01706c9311c5dfd9420611b3505d3fe56c9fb7d71a711c"
	I1016 19:44:25.651611  489193 cri.go:89] found id: "4745f67d64b371c5ebd81706d5db08ae45a0bc210dfc3842b0e4dfe1592ae79a"
	I1016 19:44:25.651649  489193 cri.go:89] found id: "8a2a4e8f60de83dc93958769d40834e8c6e8098a4d24326639566a8eb761d219"
	I1016 19:44:25.651659  489193 cri.go:89] found id: "cdb6c8787e86665ba81ed5e2b63948fa8bd322ac9fe2eeaabc3de67e2ae1762a"
	I1016 19:44:25.651665  489193 cri.go:89] found id: "2368c8473fac0e17d1c889c89f8bd36e68e1075d0382ddf4f2ad6c01dcf5819f"
	I1016 19:44:25.651669  489193 cri.go:89] found id: "01a051b12eaa75566bd0ed32bda2684f339c52afc7b5e80f79acc29785a0fe59"
	I1016 19:44:25.651686  489193 cri.go:89] found id: "5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192"
	I1016 19:44:25.651714  489193 cri.go:89] found id: "e9174935e4924619bbd7b732372997943016a19eebde101e652dde4e3e693e72"
	I1016 19:44:25.651725  489193 cri.go:89] found id: ""
	I1016 19:44:25.651811  489193 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:44:25.667338  489193 retry.go:31] will retry after 275.187566ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:25Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:44:25.942746  489193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:44:25.956636  489193 pause.go:52] kubelet running: false
	I1016 19:44:25.956710  489193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:44:26.183158  489193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:44:26.183232  489193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:44:26.266426  489193 cri.go:89] found id: "16a1dc880de0562e7fa670a01682311e2203be018e5a748bbe56cf1c1f6e3e51"
	I1016 19:44:26.266450  489193 cri.go:89] found id: "49a17fa869c65153404159392f64f8f9f559558f7abb64bc7d124d18bcc2597a"
	I1016 19:44:26.266455  489193 cri.go:89] found id: "eadf94f6b8838ac4c1800c834d95152476fd2be03f6ae1e62836a05a0e4e1248"
	I1016 19:44:26.266460  489193 cri.go:89] found id: "d7b2f5278dfd381e2a01706c9311c5dfd9420611b3505d3fe56c9fb7d71a711c"
	I1016 19:44:26.266464  489193 cri.go:89] found id: "4745f67d64b371c5ebd81706d5db08ae45a0bc210dfc3842b0e4dfe1592ae79a"
	I1016 19:44:26.266467  489193 cri.go:89] found id: "8a2a4e8f60de83dc93958769d40834e8c6e8098a4d24326639566a8eb761d219"
	I1016 19:44:26.266471  489193 cri.go:89] found id: "cdb6c8787e86665ba81ed5e2b63948fa8bd322ac9fe2eeaabc3de67e2ae1762a"
	I1016 19:44:26.266474  489193 cri.go:89] found id: "2368c8473fac0e17d1c889c89f8bd36e68e1075d0382ddf4f2ad6c01dcf5819f"
	I1016 19:44:26.266477  489193 cri.go:89] found id: "01a051b12eaa75566bd0ed32bda2684f339c52afc7b5e80f79acc29785a0fe59"
	I1016 19:44:26.266484  489193 cri.go:89] found id: "5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192"
	I1016 19:44:26.266487  489193 cri.go:89] found id: "e9174935e4924619bbd7b732372997943016a19eebde101e652dde4e3e693e72"
	I1016 19:44:26.266490  489193 cri.go:89] found id: ""
	I1016 19:44:26.266553  489193 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:44:26.286206  489193 out.go:203] 
	W1016 19:44:26.289231  489193 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:44:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 19:44:26.289251  489193 out.go:285] * 
	* 
	W1016 19:44:26.296413  489193 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:44:26.301438  489193 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-751669 --alsologtostderr -v=1 failed: exit status 80
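The pause failure above reduces to `sudo runc list -f json` failing because `/run/runc` is missing on the node. Below is a minimal Go sketch of that diagnostic step (illustrative only, not minikube's actual pause code; it assumes it is run on the node itself, e.g. over `minikube ssh`, with sudo and runc available):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The error above ("open /run/runc: no such file or directory") points at
		// runc's state directory being absent, so check it before listing containers.
		if _, err := os.Stat("/run/runc"); err != nil {
			fmt.Println("runc state directory not usable:", err)
			return
		}
		// Same command the failing pause step runs to enumerate running containers.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc list output:\n%s", out)
	}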
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-751669
helpers_test.go:243: (dbg) docker inspect embed-certs-751669:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48",
	        "Created": "2025-10-16T19:41:31.536310146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484243,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:43:22.362725277Z",
	            "FinishedAt": "2025-10-16T19:43:21.206639755Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/hosts",
	        "LogPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48-json.log",
	        "Name": "/embed-certs-751669",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-751669:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-751669",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48",
	                "LowerDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-751669",
	                "Source": "/var/lib/docker/volumes/embed-certs-751669/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-751669",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-751669",
	                "name.minikube.sigs.k8s.io": "embed-certs-751669",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e36b81ac114ab22afb3ee7e1fc5240d9fc3365d1c4379d5b94b6391f3f1df921",
	            "SandboxKey": "/var/run/docker/netns/e36b81ac114a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-751669": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:90:5d:0e:fa:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47eda41405f419208be3b296b694c6a50ba0a9ebb091dac0d31792e4b62c69d1",
	                    "EndpointID": "70612c9b22b634133e1511505c251cd8eaf3a8c345712a4494db24eb4ed54835",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-751669",
	                        "6ce556d58dc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
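Only a few fields of the inspect dump above matter for this failure: the container's `.State` (still Running, never Paused) and the published ports. A short Go sketch, assuming a local docker CLI and the same profile name, that extracts just the state with docker's `--format` templating (the same mechanism used by other inspect calls in this log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pull only the run-state fields relevant to the Pause post-mortem.
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "status={{.State.Status}} paused={{.State.Paused}}",
			"embed-certs-751669").CombinedOutput()
		if err != nil {
			fmt.Printf("docker inspect failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("embed-certs-751669: %s", out)
	}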
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669: exit status 2 (397.788136ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
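The harness treats exit status 2 from `status --format={{.Host}}` as potentially benign, since the command can exit non-zero while still printing a usable host state (here `Running`). A hedged Go sketch of that tolerant check, reusing the exact command line above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-751669", "-n", "embed-certs-751669")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit here can still come with a usable host state on stdout.
			fmt.Printf("status exited %d (may be ok): %s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Printf("host state: %s", out)
	}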
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-751669 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-751669 logs -n 25: (1.632269223s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p cert-expiration-828182                                                                                                                                                                                                                     │ cert-expiration-828182       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:44 UTC │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:44:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:44:14.929081  488039 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:44:14.929282  488039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:14.929295  488039 out.go:374] Setting ErrFile to fd 2...
	I1016 19:44:14.929300  488039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:14.929573  488039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:44:14.929995  488039 out.go:368] Setting JSON to false
	I1016 19:44:14.930964  488039 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8784,"bootTime":1760635071,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:44:14.931041  488039 start.go:141] virtualization:  
	I1016 19:44:14.934959  488039 out.go:179] * [default-k8s-diff-port-850436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:44:14.939172  488039 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:44:14.939289  488039 notify.go:220] Checking for updates...
	I1016 19:44:14.945344  488039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:44:14.948443  488039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:44:14.951423  488039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:44:14.955285  488039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:44:14.958380  488039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:44:14.961970  488039 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:14.962089  488039 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:44:14.999146  488039 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:44:14.999372  488039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:15.085184  488039 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:44:15.074217821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:15.085308  488039 docker.go:318] overlay module found
	I1016 19:44:15.088700  488039 out.go:179] * Using the docker driver based on user configuration
	I1016 19:44:15.091669  488039 start.go:305] selected driver: docker
	I1016 19:44:15.091720  488039 start.go:925] validating driver "docker" against <nil>
	I1016 19:44:15.091745  488039 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:44:15.092682  488039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:15.151295  488039 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:44:15.14117728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:15.151492  488039 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 19:44:15.152077  488039 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:44:15.155157  488039 out.go:179] * Using Docker driver with root privileges
	I1016 19:44:15.158101  488039 cni.go:84] Creating CNI manager for ""
	I1016 19:44:15.158189  488039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:44:15.158204  488039 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 19:44:15.158298  488039 start.go:349] cluster config:
	{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:44:15.163324  488039 out.go:179] * Starting "default-k8s-diff-port-850436" primary control-plane node in "default-k8s-diff-port-850436" cluster
	I1016 19:44:15.166545  488039 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:44:15.169643  488039 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:44:15.172620  488039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:44:15.172629  488039 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:44:15.172685  488039 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:44:15.172697  488039 cache.go:58] Caching tarball of preloaded images
	I1016 19:44:15.172789  488039 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:44:15.172799  488039 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:44:15.172906  488039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json ...
	I1016 19:44:15.172922  488039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json: {Name:mkc2c46257e1d78b0da4f553d2a086e651cc5948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:15.194525  488039 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:44:15.194558  488039 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:44:15.194582  488039 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:44:15.194620  488039 start.go:360] acquireMachinesLock for default-k8s-diff-port-850436: {Name:mk7e6cd57751a3c09c0a04e7fccd20808ff22477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:44:15.194751  488039 start.go:364] duration metric: took 107.98µs to acquireMachinesLock for "default-k8s-diff-port-850436"
	I1016 19:44:15.194785  488039 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:44:15.194861  488039 start.go:125] createHost starting for "" (driver="docker")
	I1016 19:44:15.198357  488039 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 19:44:15.198649  488039 start.go:159] libmachine.API.Create for "default-k8s-diff-port-850436" (driver="docker")
	I1016 19:44:15.198705  488039 client.go:168] LocalClient.Create starting
	I1016 19:44:15.198789  488039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 19:44:15.198834  488039 main.go:141] libmachine: Decoding PEM data...
	I1016 19:44:15.198853  488039 main.go:141] libmachine: Parsing certificate...
	I1016 19:44:15.198912  488039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 19:44:15.198941  488039 main.go:141] libmachine: Decoding PEM data...
	I1016 19:44:15.198951  488039 main.go:141] libmachine: Parsing certificate...
	I1016 19:44:15.199363  488039 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-850436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 19:44:15.216844  488039 cli_runner.go:211] docker network inspect default-k8s-diff-port-850436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 19:44:15.216944  488039 network_create.go:284] running [docker network inspect default-k8s-diff-port-850436] to gather additional debugging logs...
	I1016 19:44:15.216964  488039 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-850436
	W1016 19:44:15.233506  488039 cli_runner.go:211] docker network inspect default-k8s-diff-port-850436 returned with exit code 1
	I1016 19:44:15.233548  488039 network_create.go:287] error running [docker network inspect default-k8s-diff-port-850436]: docker network inspect default-k8s-diff-port-850436: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-850436 not found
	I1016 19:44:15.233563  488039 network_create.go:289] output of [docker network inspect default-k8s-diff-port-850436]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-850436 not found
	
	** /stderr **
	I1016 19:44:15.233663  488039 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:44:15.254013  488039 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
	I1016 19:44:15.254635  488039 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcb5241e782 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:58:26:d7:8f:45} reservation:<nil>}
	I1016 19:44:15.255068  488039 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-26579fafc836 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:48:af:83:92:ac} reservation:<nil>}
	I1016 19:44:15.255615  488039 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d3210}
	I1016 19:44:15.255638  488039 network_create.go:124] attempt to create docker network default-k8s-diff-port-850436 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1016 19:44:15.255701  488039 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 default-k8s-diff-port-850436
	I1016 19:44:15.318478  488039 network_create.go:108] docker network default-k8s-diff-port-850436 192.168.76.0/24 created
	I1016 19:44:15.318508  488039 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-850436" container
	I1016 19:44:15.318590  488039 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 19:44:15.336057  488039 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-850436 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --label created_by.minikube.sigs.k8s.io=true
	I1016 19:44:15.355412  488039 oci.go:103] Successfully created a docker volume default-k8s-diff-port-850436
	I1016 19:44:15.355507  488039 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-850436-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --entrypoint /usr/bin/test -v default-k8s-diff-port-850436:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 19:44:15.927997  488039 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-850436
	I1016 19:44:15.928050  488039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:44:15.928070  488039 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 19:44:15.928152  488039 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-850436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 19:44:20.313453  488039 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-850436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.38526244s)
	I1016 19:44:20.313497  488039 kic.go:203] duration metric: took 4.385424609s to extract preloaded images to volume ...
	W1016 19:44:20.313639  488039 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 19:44:20.313747  488039 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 19:44:20.369522  488039 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-850436 --name default-k8s-diff-port-850436 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --network default-k8s-diff-port-850436 --ip 192.168.76.2 --volume default-k8s-diff-port-850436:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 19:44:20.689914  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Running}}
	I1016 19:44:20.712920  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:20.733454  488039 cli_runner.go:164] Run: docker exec default-k8s-diff-port-850436 stat /var/lib/dpkg/alternatives/iptables
	I1016 19:44:20.782303  488039 oci.go:144] the created container "default-k8s-diff-port-850436" has a running status.
	I1016 19:44:20.782335  488039 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa...
	I1016 19:44:21.509545  488039 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 19:44:21.530571  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:21.547889  488039 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 19:44:21.547914  488039 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-850436 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 19:44:21.591378  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:21.609533  488039 machine.go:93] provisionDockerMachine start ...
	I1016 19:44:21.609655  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:21.626789  488039 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:21.627182  488039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1016 19:44:21.627209  488039 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:44:21.627872  488039 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:44:24.784653  488039 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850436
	
	I1016 19:44:24.784686  488039 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-850436"
	I1016 19:44:24.784781  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:24.803381  488039 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:24.803715  488039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1016 19:44:24.803736  488039 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850436 && echo "default-k8s-diff-port-850436" | sudo tee /etc/hostname
	
	
	==> CRI-O <==
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.673434275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.687532364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.688733706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.713584148Z" level=info msg="Created container 5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56/dashboard-metrics-scraper" id=2626b43b-9627-4831-b372-61ae27be23b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.714838619Z" level=info msg="Starting container: 5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192" id=13be0604-6422-473e-9b5c-e7fdd0865c55 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.718663136Z" level=info msg="Started container" PID=1668 containerID=5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56/dashboard-metrics-scraper id=13be0604-6422-473e-9b5c-e7fdd0865c55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda
	Oct 16 19:44:13 embed-certs-751669 conmon[1666]: conmon 5c3452355191ade33c81 <ninfo>: container 1668 exited with status 1
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.989811446Z" level=info msg="Removing container: 013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf" id=ba44707c-883b-45db-9042-979181ca9456 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:44:14 embed-certs-751669 crio[648]: time="2025-10-16T19:44:14.00955355Z" level=info msg="Error loading conmon cgroup of container 013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf: cgroup deleted" id=ba44707c-883b-45db-9042-979181ca9456 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:44:14 embed-certs-751669 crio[648]: time="2025-10-16T19:44:14.018037523Z" level=info msg="Removed container 013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56/dashboard-metrics-scraper" id=ba44707c-883b-45db-9042-979181ca9456 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.516588327Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.520250545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.520284195Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.520308302Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.525051672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.525084641Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.525114672Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.528388627Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.528534286Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.544608323Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.548717988Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.54875277Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.548778239Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.552052547Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.552085688Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5c3452355191a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   4c56846c6edcb       dashboard-metrics-scraper-6ffb444bf9-q8s56   kubernetes-dashboard
	16a1dc880de05       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   9c8070e7c7bfc       storage-provisioner                          kube-system
	e9174935e4924       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   451a0c5d5601b       kubernetes-dashboard-855c9754f9-m6s27        kubernetes-dashboard
	49a17fa869c65       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   9c8070e7c7bfc       storage-provisioner                          kube-system
	eadf94f6b8838       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   8317e41b60dcb       coredns-66bc5c9577-2h6z6                     kube-system
	d7b2f5278dfd3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           48 seconds ago      Running             kube-proxy                  1                   9341e82762fc9       kube-proxy-lvmlh                             kube-system
	7a43316467528       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   93b8d2e01ab86       busybox                                      default
	4745f67d64b37       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   46d3da138e218       kindnet-cjx87                                kube-system
	8a2a4e8f60de8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   dbf130d56f804       etcd-embed-certs-751669                      kube-system
	cdb6c8787e866       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   b4b5661e8c1dc       kube-apiserver-embed-certs-751669            kube-system
	2368c8473fac0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   3e2082073b8ed       kube-controller-manager-embed-certs-751669   kube-system
	01a051b12eaa7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   634d0eea208e4       kube-scheduler-embed-certs-751669            kube-system
	
	
	==> coredns [eadf94f6b8838ac4c1800c834d95152476fd2be03f6ae1e62836a05a0e4e1248] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47730 - 48767 "HINFO IN 2042954503980634162.7778387406156326914. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013589714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-751669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-751669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=embed-certs-751669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_42_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-751669
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:44:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-751669
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                e85b0b1d-7b19-4554-be69-b4ff58296a42
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-2h6z6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-embed-certs-751669                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-cjx87                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-embed-certs-751669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-embed-certs-751669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-lvmlh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-embed-certs-751669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q8s56    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m6s27         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m20s                  kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s                  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m20s                  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m16s                  node-controller  Node embed-certs-751669 event: Registered Node embed-certs-751669 in Controller
	  Normal   NodeReady                94s                    kubelet          Node embed-certs-751669 status is now: NodeReady
	  Normal   Starting                 57s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                    node-controller  Node embed-certs-751669 event: Registered Node embed-certs-751669 in Controller
	
	
	==> dmesg <==
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8a2a4e8f60de83dc93958769d40834e8c6e8098a4d24326639566a8eb761d219] <==
	{"level":"warn","ts":"2025-10-16T19:43:36.238268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.253618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.282610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.298709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.328124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.343218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.359312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.405192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.421203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.456443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.469997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.491916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.509433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.526881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.545809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.562591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.581388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.599046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.620552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.634707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.657392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.684244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.705863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.719795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.850731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:44:27 up  2:26,  0 user,  load average: 2.83, 3.43, 2.95
	Linux embed-certs-751669 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4745f67d64b371c5ebd81706d5db08ae45a0bc210dfc3842b0e4dfe1592ae79a] <==
	I1016 19:43:39.321562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:43:39.321817       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 19:43:39.332890       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:43:39.337757       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:43:39.337788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:43:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:43:39.513163       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:43:39.513797       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:43:39.513870       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:43:39.515484       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:44:09.513412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:44:09.514579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 19:44:09.514617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 19:44:09.515800       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1016 19:44:11.115573       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:44:11.115684       1 metrics.go:72] Registering metrics
	I1016 19:44:11.115787       1 controller.go:711] "Syncing nftables rules"
	I1016 19:44:19.516165       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 19:44:19.516304       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cdb6c8787e86665ba81ed5e2b63948fa8bd322ac9fe2eeaabc3de67e2ae1762a] <==
	I1016 19:43:38.213233       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 19:43:38.213211       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:43:38.213411       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:43:38.213267       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:43:38.213944       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 19:43:38.213978       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:43:38.213255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:43:38.229651       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:43:38.229864       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:43:38.230226       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:43:38.235457       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:43:38.242009       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 19:43:38.250994       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 19:43:38.345109       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:43:38.631562       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:43:38.714804       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:43:38.794113       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:43:38.924035       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:43:39.024491       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:43:39.058231       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:43:39.292404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.115.35"}
	I1016 19:43:39.341331       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.109.79"}
	I1016 19:43:41.670104       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:43:41.770064       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:43:41.871819       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2368c8473fac0e17d1c889c89f8bd36e68e1075d0382ddf4f2ad6c01dcf5819f] <==
	I1016 19:43:41.453224       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:41.453247       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:43:41.453258       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:43:41.456099       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:41.460784       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:43:41.460992       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:43:41.461127       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:43:41.461219       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:43:41.461250       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:43:41.461278       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:43:41.465464       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 19:43:41.466016       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 19:43:41.466183       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:43:41.467163       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:43:41.467214       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 19:43:41.467240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:43:41.467871       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 19:43:41.468157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 19:43:41.472406       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:43:41.473166       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:43:41.473258       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-751669"
	I1016 19:43:41.473319       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 19:43:41.473345       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 19:43:41.480393       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 19:43:41.481781       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [d7b2f5278dfd381e2a01706c9311c5dfd9420611b3505d3fe56c9fb7d71a711c] <==
	I1016 19:43:39.407451       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:43:39.555381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:43:39.657264       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:43:39.657306       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 19:43:39.657398       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:43:39.678185       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:43:39.678236       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:43:39.682460       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:43:39.682774       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:43:39.682847       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:39.684156       1 config.go:200] "Starting service config controller"
	I1016 19:43:39.684226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:43:39.684272       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:43:39.684299       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:43:39.684346       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:43:39.684375       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:43:39.685594       1 config.go:309] "Starting node config controller"
	I1016 19:43:39.686161       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:43:39.686230       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:43:39.784388       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 19:43:39.784387       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:43:39.784497       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [01a051b12eaa75566bd0ed32bda2684f339c52afc7b5e80f79acc29785a0fe59] <==
	I1016 19:43:33.918295       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:43:38.502846       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:43:38.502889       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:38.519007       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:43:38.519105       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:43:38.519128       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:43:38.519156       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:43:38.521240       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:38.521270       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:38.521309       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:38.521319       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:38.621225       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 19:43:38.621470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:38.622178       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026177     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf8n\" (UniqueName: \"kubernetes.io/projected/9596d11f-85b7-4ce4-b23f-262ed61f7dca-kube-api-access-lxf8n\") pod \"kubernetes-dashboard-855c9754f9-m6s27\" (UID: \"9596d11f-85b7-4ce4-b23f-262ed61f7dca\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m6s27"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026235     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9596d11f-85b7-4ce4-b23f-262ed61f7dca-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m6s27\" (UID: \"9596d11f-85b7-4ce4-b23f-262ed61f7dca\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m6s27"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026261     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdfh\" (UniqueName: \"kubernetes.io/projected/50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8-kube-api-access-8jdfh\") pod \"dashboard-metrics-scraper-6ffb444bf9-q8s56\" (UID: \"50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026281     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-q8s56\" (UID: \"50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: W1016 19:43:42.293559     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/crio-451a0c5d5601b300fbc2e3b5d490327e1c59c7780a71cfe29dacbe0840945711 WatchSource:0}: Error finding container 451a0c5d5601b300fbc2e3b5d490327e1c59c7780a71cfe29dacbe0840945711: Status 404 returned error can't find the container with id 451a0c5d5601b300fbc2e3b5d490327e1c59c7780a71cfe29dacbe0840945711
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: W1016 19:43:42.309112     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/crio-4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda WatchSource:0}: Error finding container 4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda: Status 404 returned error can't find the container with id 4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda
	Oct 16 19:43:48 embed-certs-751669 kubelet[773]: I1016 19:43:48.289531     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m6s27" podStartSLOduration=2.084138476 podStartE2EDuration="7.28951067s" podCreationTimestamp="2025-10-16 19:43:41 +0000 UTC" firstStartedPulling="2025-10-16 19:43:42.300234763 +0000 UTC m=+11.903573089" lastFinishedPulling="2025-10-16 19:43:47.505606941 +0000 UTC m=+17.108945283" observedRunningTime="2025-10-16 19:43:47.926103897 +0000 UTC m=+17.529442223" watchObservedRunningTime="2025-10-16 19:43:48.28951067 +0000 UTC m=+17.892849144"
	Oct 16 19:43:52 embed-certs-751669 kubelet[773]: I1016 19:43:52.914990     773 scope.go:117] "RemoveContainer" containerID="900be252e4181d3e9679684d6183af8e3662e2bf59f1ed3fb14f32940e2ca275"
	Oct 16 19:43:53 embed-certs-751669 kubelet[773]: I1016 19:43:53.925984     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:43:53 embed-certs-751669 kubelet[773]: E1016 19:43:53.926209     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:43:53 embed-certs-751669 kubelet[773]: I1016 19:43:53.926972     773 scope.go:117] "RemoveContainer" containerID="900be252e4181d3e9679684d6183af8e3662e2bf59f1ed3fb14f32940e2ca275"
	Oct 16 19:43:54 embed-certs-751669 kubelet[773]: I1016 19:43:54.929794     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:43:54 embed-certs-751669 kubelet[773]: E1016 19:43:54.930419     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:02 embed-certs-751669 kubelet[773]: I1016 19:44:02.258395     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:44:02 embed-certs-751669 kubelet[773]: E1016 19:44:02.258590     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:09 embed-certs-751669 kubelet[773]: I1016 19:44:09.968099     773 scope.go:117] "RemoveContainer" containerID="49a17fa869c65153404159392f64f8f9f559558f7abb64bc7d124d18bcc2597a"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: I1016 19:44:13.668297     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: I1016 19:44:13.984548     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: I1016 19:44:13.985408     773 scope.go:117] "RemoveContainer" containerID="5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: E1016 19:44:13.986051     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:22 embed-certs-751669 kubelet[773]: I1016 19:44:22.260211     773 scope.go:117] "RemoveContainer" containerID="5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192"
	Oct 16 19:44:22 embed-certs-751669 kubelet[773]: E1016 19:44:22.261259     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:24 embed-certs-751669 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:44:24 embed-certs-751669 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:44:24 embed-certs-751669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e9174935e4924619bbd7b732372997943016a19eebde101e652dde4e3e693e72] <==
	2025/10/16 19:43:47 Starting overwatch
	2025/10/16 19:43:47 Using namespace: kubernetes-dashboard
	2025/10/16 19:43:47 Using in-cluster config to connect to apiserver
	2025/10/16 19:43:47 Using secret token for csrf signing
	2025/10/16 19:43:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:43:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:43:47 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 19:43:47 Generating JWE encryption key
	2025/10/16 19:43:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:43:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:43:48 Initializing JWE encryption key from synchronized object
	2025/10/16 19:43:48 Creating in-cluster Sidecar client
	2025/10/16 19:43:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:43:48 Serving insecurely on HTTP port: 9090
	2025/10/16 19:44:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [16a1dc880de0562e7fa670a01682311e2203be018e5a748bbe56cf1c1f6e3e51] <==
	I1016 19:44:10.051813       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:44:10.067760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:44:10.067824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:44:10.073643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:13.529550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:17.789774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:21.388961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:24.442541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:27.464940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:27.477675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:27.477857       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:44:27.478058       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-751669_5d6fb421-a01f-4c28-aebd-bfec95a8366a!
	I1016 19:44:27.479182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b3bf56d-d1bb-49d9-8a23-b33cfd29d57a", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-751669_5d6fb421-a01f-4c28-aebd-bfec95a8366a became leader
	W1016 19:44:27.503216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:27.510679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:27.578840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-751669_5d6fb421-a01f-4c28-aebd-bfec95a8366a!
	
	
	==> storage-provisioner [49a17fa869c65153404159392f64f8f9f559558f7abb64bc7d124d18bcc2597a] <==
	I1016 19:43:39.357437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:44:09.359543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-751669 -n embed-certs-751669
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-751669 -n embed-certs-751669: exit status 2 (500.572493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-751669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-751669
helpers_test.go:243: (dbg) docker inspect embed-certs-751669:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48",
	        "Created": "2025-10-16T19:41:31.536310146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484243,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:43:22.362725277Z",
	            "FinishedAt": "2025-10-16T19:43:21.206639755Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/hosts",
	        "LogPath": "/var/lib/docker/containers/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48-json.log",
	        "Name": "/embed-certs-751669",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-751669:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-751669",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48",
	                "LowerDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf63f44205295f3d0a02e5980b8f083a596a8cc4d722a04ab4c6c7d58f7ca488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-751669",
	                "Source": "/var/lib/docker/volumes/embed-certs-751669/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-751669",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-751669",
	                "name.minikube.sigs.k8s.io": "embed-certs-751669",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e36b81ac114ab22afb3ee7e1fc5240d9fc3365d1c4379d5b94b6391f3f1df921",
	            "SandboxKey": "/var/run/docker/netns/e36b81ac114a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-751669": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:90:5d:0e:fa:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47eda41405f419208be3b296b694c6a50ba0a9ebb091dac0d31792e4b62c69d1",
	                    "EndpointID": "70612c9b22b634133e1511505c251cd8eaf3a8c345712a4494db24eb4ed54835",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-751669",
	                        "6ce556d58dc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
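The inspect dump above is the complete container state; a single field can be pulled out of the same data with a Go template via docker inspect's --format/-f flag, which is the mechanism minikube itself uses later in these logs. A minimal sketch against the embed-certs-751669 container shown above (the printed port is whatever Docker assigned for 22/tcp, 33438 in this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-751669
	# prints the host port published for the container's SSH port, e.g. 33438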
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669: exit status 2 (479.69912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
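Only the Host field is queried here; minikube's status template also exposes the other component fields (Kubelet, APIServer, Kubeconfig), which can make the non-zero exit easier to interpret when a profile has just been paused. A hedged sketch using the same binary and profile as above:

	out/minikube-linux-arm64 status -p embed-certs-751669 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	# for a paused profile the host is typically Running while kubelet/apiserver are not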
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-751669 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-751669 logs -n 25: (1.71532507s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:40 UTC │ 16 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-828182       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-663330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-663330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │                     │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p cert-expiration-828182                                                                                                                                                                                                                     │ cert-expiration-828182       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:44 UTC │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:44:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:44:14.929081  488039 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:44:14.929282  488039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:14.929295  488039 out.go:374] Setting ErrFile to fd 2...
	I1016 19:44:14.929300  488039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:14.929573  488039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:44:14.929995  488039 out.go:368] Setting JSON to false
	I1016 19:44:14.930964  488039 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8784,"bootTime":1760635071,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:44:14.931041  488039 start.go:141] virtualization:  
	I1016 19:44:14.934959  488039 out.go:179] * [default-k8s-diff-port-850436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:44:14.939172  488039 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:44:14.939289  488039 notify.go:220] Checking for updates...
	I1016 19:44:14.945344  488039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:44:14.948443  488039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:44:14.951423  488039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:44:14.955285  488039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:44:14.958380  488039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:44:14.961970  488039 config.go:182] Loaded profile config "embed-certs-751669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:14.962089  488039 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:44:14.999146  488039 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:44:14.999372  488039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:15.085184  488039 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:44:15.074217821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:15.085308  488039 docker.go:318] overlay module found
	I1016 19:44:15.088700  488039 out.go:179] * Using the docker driver based on user configuration
	I1016 19:44:15.091669  488039 start.go:305] selected driver: docker
	I1016 19:44:15.091720  488039 start.go:925] validating driver "docker" against <nil>
	I1016 19:44:15.091745  488039 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:44:15.092682  488039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:15.151295  488039 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:44:15.14117728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:15.151492  488039 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 19:44:15.152077  488039 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:44:15.155157  488039 out.go:179] * Using Docker driver with root privileges
	I1016 19:44:15.158101  488039 cni.go:84] Creating CNI manager for ""
	I1016 19:44:15.158189  488039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:44:15.158204  488039 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 19:44:15.158298  488039 start.go:349] cluster config:
	{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:44:15.163324  488039 out.go:179] * Starting "default-k8s-diff-port-850436" primary control-plane node in "default-k8s-diff-port-850436" cluster
	I1016 19:44:15.166545  488039 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:44:15.169643  488039 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:44:15.172620  488039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:44:15.172629  488039 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:44:15.172685  488039 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:44:15.172697  488039 cache.go:58] Caching tarball of preloaded images
	I1016 19:44:15.172789  488039 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:44:15.172799  488039 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:44:15.172906  488039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json ...
	I1016 19:44:15.172922  488039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json: {Name:mkc2c46257e1d78b0da4f553d2a086e651cc5948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:15.194525  488039 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:44:15.194558  488039 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:44:15.194582  488039 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:44:15.194620  488039 start.go:360] acquireMachinesLock for default-k8s-diff-port-850436: {Name:mk7e6cd57751a3c09c0a04e7fccd20808ff22477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:44:15.194751  488039 start.go:364] duration metric: took 107.98µs to acquireMachinesLock for "default-k8s-diff-port-850436"
	I1016 19:44:15.194785  488039 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:44:15.194861  488039 start.go:125] createHost starting for "" (driver="docker")
	I1016 19:44:15.198357  488039 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 19:44:15.198649  488039 start.go:159] libmachine.API.Create for "default-k8s-diff-port-850436" (driver="docker")
	I1016 19:44:15.198705  488039 client.go:168] LocalClient.Create starting
	I1016 19:44:15.198789  488039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 19:44:15.198834  488039 main.go:141] libmachine: Decoding PEM data...
	I1016 19:44:15.198853  488039 main.go:141] libmachine: Parsing certificate...
	I1016 19:44:15.198912  488039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 19:44:15.198941  488039 main.go:141] libmachine: Decoding PEM data...
	I1016 19:44:15.198951  488039 main.go:141] libmachine: Parsing certificate...
	I1016 19:44:15.199363  488039 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-850436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 19:44:15.216844  488039 cli_runner.go:211] docker network inspect default-k8s-diff-port-850436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 19:44:15.216944  488039 network_create.go:284] running [docker network inspect default-k8s-diff-port-850436] to gather additional debugging logs...
	I1016 19:44:15.216964  488039 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-850436
	W1016 19:44:15.233506  488039 cli_runner.go:211] docker network inspect default-k8s-diff-port-850436 returned with exit code 1
	I1016 19:44:15.233548  488039 network_create.go:287] error running [docker network inspect default-k8s-diff-port-850436]: docker network inspect default-k8s-diff-port-850436: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-850436 not found
	I1016 19:44:15.233563  488039 network_create.go:289] output of [docker network inspect default-k8s-diff-port-850436]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-850436 not found
	
	** /stderr **
	I1016 19:44:15.233663  488039 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:44:15.254013  488039 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
	I1016 19:44:15.254635  488039 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcb5241e782 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:58:26:d7:8f:45} reservation:<nil>}
	I1016 19:44:15.255068  488039 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-26579fafc836 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:48:af:83:92:ac} reservation:<nil>}
	I1016 19:44:15.255615  488039 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d3210}
	I1016 19:44:15.255638  488039 network_create.go:124] attempt to create docker network default-k8s-diff-port-850436 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1016 19:44:15.255701  488039 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 default-k8s-diff-port-850436
	I1016 19:44:15.318478  488039 network_create.go:108] docker network default-k8s-diff-port-850436 192.168.76.0/24 created
	I1016 19:44:15.318508  488039 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-850436" container
	I1016 19:44:15.318590  488039 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 19:44:15.336057  488039 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-850436 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --label created_by.minikube.sigs.k8s.io=true
	I1016 19:44:15.355412  488039 oci.go:103] Successfully created a docker volume default-k8s-diff-port-850436
	I1016 19:44:15.355507  488039 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-850436-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --entrypoint /usr/bin/test -v default-k8s-diff-port-850436:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 19:44:15.927997  488039 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-850436
	I1016 19:44:15.928050  488039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:44:15.928070  488039 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 19:44:15.928152  488039 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-850436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 19:44:20.313453  488039 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-850436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.38526244s)
	I1016 19:44:20.313497  488039 kic.go:203] duration metric: took 4.385424609s to extract preloaded images to volume ...
	W1016 19:44:20.313639  488039 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 19:44:20.313747  488039 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 19:44:20.369522  488039 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-850436 --name default-k8s-diff-port-850436 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-850436 --network default-k8s-diff-port-850436 --ip 192.168.76.2 --volume default-k8s-diff-port-850436:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 19:44:20.689914  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Running}}
	I1016 19:44:20.712920  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:20.733454  488039 cli_runner.go:164] Run: docker exec default-k8s-diff-port-850436 stat /var/lib/dpkg/alternatives/iptables
	I1016 19:44:20.782303  488039 oci.go:144] the created container "default-k8s-diff-port-850436" has a running status.
	I1016 19:44:20.782335  488039 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa...
	I1016 19:44:21.509545  488039 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 19:44:21.530571  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:21.547889  488039 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 19:44:21.547914  488039 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-850436 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 19:44:21.591378  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:21.609533  488039 machine.go:93] provisionDockerMachine start ...
	I1016 19:44:21.609655  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:21.626789  488039 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:21.627182  488039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1016 19:44:21.627209  488039 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:44:21.627872  488039 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:44:24.784653  488039 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850436
	
	I1016 19:44:24.784686  488039 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-850436"
	I1016 19:44:24.784781  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:24.803381  488039 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:24.803715  488039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1016 19:44:24.803736  488039 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850436 && echo "default-k8s-diff-port-850436" | sudo tee /etc/hostname
	I1016 19:44:24.970739  488039 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850436
	
	I1016 19:44:24.970817  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:24.988593  488039 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:24.988900  488039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1016 19:44:24.988919  488039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:44:25.137571  488039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:44:25.137595  488039 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:44:25.137616  488039 ubuntu.go:190] setting up certificates
	I1016 19:44:25.137626  488039 provision.go:84] configureAuth start
	I1016 19:44:25.137686  488039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:44:25.155382  488039 provision.go:143] copyHostCerts
	I1016 19:44:25.155461  488039 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:44:25.155470  488039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:44:25.155550  488039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:44:25.155660  488039 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:44:25.155665  488039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:44:25.155690  488039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:44:25.155753  488039 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:44:25.155758  488039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:44:25.155781  488039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:44:25.155835  488039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850436 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-850436 localhost minikube]
	I1016 19:44:26.019736  488039 provision.go:177] copyRemoteCerts
	I1016 19:44:26.019868  488039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:44:26.019953  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:26.050970  488039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:44:26.157930  488039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1016 19:44:26.178030  488039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:44:26.199484  488039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:44:26.221404  488039 provision.go:87] duration metric: took 1.083754673s to configureAuth
	I1016 19:44:26.221432  488039 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:44:26.221621  488039 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:26.221730  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:26.241558  488039 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:26.241868  488039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1016 19:44:26.241883  488039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:44:26.607501  488039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:44:26.607526  488039 machine.go:96] duration metric: took 4.997971087s to provisionDockerMachine
	I1016 19:44:26.607536  488039 client.go:171] duration metric: took 11.408821565s to LocalClient.Create
	I1016 19:44:26.607549  488039 start.go:167] duration metric: took 11.408901747s to libmachine.API.Create "default-k8s-diff-port-850436"
	I1016 19:44:26.607569  488039 start.go:293] postStartSetup for "default-k8s-diff-port-850436" (driver="docker")
	I1016 19:44:26.607582  488039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:44:26.607646  488039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:44:26.607686  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:26.630812  488039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:44:26.738863  488039 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:44:26.746955  488039 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:44:26.746980  488039 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:44:26.746992  488039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:44:26.747047  488039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:44:26.747128  488039 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:44:26.747229  488039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:44:26.759310  488039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:44:26.778797  488039 start.go:296] duration metric: took 171.211416ms for postStartSetup
	I1016 19:44:26.779140  488039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:44:26.798156  488039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json ...
	I1016 19:44:26.798423  488039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:44:26.798465  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:26.819755  488039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:44:26.922768  488039 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:44:26.928161  488039 start.go:128] duration metric: took 11.733285374s to createHost
	I1016 19:44:26.928183  488039 start.go:83] releasing machines lock for "default-k8s-diff-port-850436", held for 11.733418585s
	I1016 19:44:26.928330  488039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:44:26.952605  488039 ssh_runner.go:195] Run: cat /version.json
	I1016 19:44:26.952653  488039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:44:26.952659  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:26.952718  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:26.988240  488039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:44:27.010552  488039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:44:27.199246  488039 ssh_runner.go:195] Run: systemctl --version
	I1016 19:44:27.206896  488039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:44:27.263685  488039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:44:27.269099  488039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:44:27.269236  488039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:44:27.311584  488039 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1016 19:44:27.311650  488039 start.go:495] detecting cgroup driver to use...
	I1016 19:44:27.311699  488039 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:44:27.311780  488039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:44:27.337737  488039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:44:27.354310  488039 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:44:27.354416  488039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:44:27.375302  488039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:44:27.402500  488039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:44:27.583325  488039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:44:27.753183  488039 docker.go:234] disabling docker service ...
	I1016 19:44:27.753261  488039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:44:27.776358  488039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:44:27.791799  488039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:44:27.957070  488039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:44:28.126040  488039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:44:28.144743  488039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:44:28.162695  488039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:44:28.162808  488039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:28.175047  488039 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:44:28.175161  488039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:28.186595  488039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:28.195292  488039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:28.205036  488039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:44:28.213328  488039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:28.223969  488039 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:28.240731  488039 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:28.254005  488039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:44:28.262524  488039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:44:28.270575  488039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:44:28.441696  488039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:44:28.629437  488039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:44:28.629510  488039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:44:28.634292  488039 start.go:563] Will wait 60s for crictl version
	I1016 19:44:28.634359  488039 ssh_runner.go:195] Run: which crictl
	I1016 19:44:28.639847  488039 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:44:28.679425  488039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:44:28.679513  488039 ssh_runner.go:195] Run: crio --version
	I1016 19:44:28.716770  488039 ssh_runner.go:195] Run: crio --version
	I1016 19:44:28.763764  488039 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.673434275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.687532364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.688733706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.713584148Z" level=info msg="Created container 5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56/dashboard-metrics-scraper" id=2626b43b-9627-4831-b372-61ae27be23b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.714838619Z" level=info msg="Starting container: 5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192" id=13be0604-6422-473e-9b5c-e7fdd0865c55 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.718663136Z" level=info msg="Started container" PID=1668 containerID=5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56/dashboard-metrics-scraper id=13be0604-6422-473e-9b5c-e7fdd0865c55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda
	Oct 16 19:44:13 embed-certs-751669 conmon[1666]: conmon 5c3452355191ade33c81 <ninfo>: container 1668 exited with status 1
	Oct 16 19:44:13 embed-certs-751669 crio[648]: time="2025-10-16T19:44:13.989811446Z" level=info msg="Removing container: 013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf" id=ba44707c-883b-45db-9042-979181ca9456 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:44:14 embed-certs-751669 crio[648]: time="2025-10-16T19:44:14.00955355Z" level=info msg="Error loading conmon cgroup of container 013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf: cgroup deleted" id=ba44707c-883b-45db-9042-979181ca9456 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:44:14 embed-certs-751669 crio[648]: time="2025-10-16T19:44:14.018037523Z" level=info msg="Removed container 013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56/dashboard-metrics-scraper" id=ba44707c-883b-45db-9042-979181ca9456 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.516588327Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.520250545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.520284195Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.520308302Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.525051672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.525084641Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.525114672Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.528388627Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.528534286Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.544608323Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.548717988Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.54875277Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.548778239Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.552052547Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:44:19 embed-certs-751669 crio[648]: time="2025-10-16T19:44:19.552085688Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5c3452355191a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   4c56846c6edcb       dashboard-metrics-scraper-6ffb444bf9-q8s56   kubernetes-dashboard
	16a1dc880de05       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   9c8070e7c7bfc       storage-provisioner                          kube-system
	e9174935e4924       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   451a0c5d5601b       kubernetes-dashboard-855c9754f9-m6s27        kubernetes-dashboard
	49a17fa869c65       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   9c8070e7c7bfc       storage-provisioner                          kube-system
	eadf94f6b8838       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   8317e41b60dcb       coredns-66bc5c9577-2h6z6                     kube-system
	d7b2f5278dfd3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   9341e82762fc9       kube-proxy-lvmlh                             kube-system
	7a43316467528       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   93b8d2e01ab86       busybox                                      default
	4745f67d64b37       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   46d3da138e218       kindnet-cjx87                                kube-system
	8a2a4e8f60de8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   dbf130d56f804       etcd-embed-certs-751669                      kube-system
	cdb6c8787e866       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   b4b5661e8c1dc       kube-apiserver-embed-certs-751669            kube-system
	2368c8473fac0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   3e2082073b8ed       kube-controller-manager-embed-certs-751669   kube-system
	01a051b12eaa7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   634d0eea208e4       kube-scheduler-embed-certs-751669            kube-system
	
	
	==> coredns [eadf94f6b8838ac4c1800c834d95152476fd2be03f6ae1e62836a05a0e4e1248] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47730 - 48767 "HINFO IN 2042954503980634162.7778387406156326914. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013589714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-751669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-751669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=embed-certs-751669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_42_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-751669
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:44:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:41:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:44:08 +0000   Thu, 16 Oct 2025 19:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-751669
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                e85b0b1d-7b19-4554-be69-b4ff58296a42
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-2h6z6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-751669                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-cjx87                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-751669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-751669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-lvmlh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-751669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q8s56    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m6s27         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-751669 event: Registered Node embed-certs-751669 in Controller
	  Normal   NodeReady                97s                    kubelet          Node embed-certs-751669 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node embed-certs-751669 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node embed-certs-751669 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node embed-certs-751669 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node embed-certs-751669 event: Registered Node embed-certs-751669 in Controller
	
	
	==> dmesg <==
	[Oct16 19:19] overlayfs: idmapped layers are currently not supported
	[Oct16 19:20] overlayfs: idmapped layers are currently not supported
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8a2a4e8f60de83dc93958769d40834e8c6e8098a4d24326639566a8eb761d219] <==
	{"level":"warn","ts":"2025-10-16T19:43:36.238268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.253618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.282610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.298709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.328124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.343218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.359312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.405192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.421203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.456443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.469997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.491916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.509433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.526881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.545809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.562591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.581388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.599046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.620552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.634707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.657392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.684244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.705863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.719795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:43:36.850731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:44:30 up  2:26,  0 user,  load average: 2.76, 3.41, 2.95
	Linux embed-certs-751669 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4745f67d64b371c5ebd81706d5db08ae45a0bc210dfc3842b0e4dfe1592ae79a] <==
	I1016 19:43:39.321562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:43:39.321817       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 19:43:39.332890       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:43:39.337757       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:43:39.337788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:43:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:43:39.513163       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:43:39.513797       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:43:39.513870       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:43:39.515484       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:44:09.513412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:44:09.514579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 19:44:09.514617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 19:44:09.515800       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1016 19:44:11.115573       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:44:11.115684       1 metrics.go:72] Registering metrics
	I1016 19:44:11.115787       1 controller.go:711] "Syncing nftables rules"
	I1016 19:44:19.516165       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 19:44:19.516304       1 main.go:301] handling current node
	I1016 19:44:29.518803       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 19:44:29.518836       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cdb6c8787e86665ba81ed5e2b63948fa8bd322ac9fe2eeaabc3de67e2ae1762a] <==
	I1016 19:43:38.213233       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 19:43:38.213211       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:43:38.213411       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:43:38.213267       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:43:38.213944       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 19:43:38.213978       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:43:38.213255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:43:38.229651       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:43:38.229864       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:43:38.230226       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:43:38.235457       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:43:38.242009       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 19:43:38.250994       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 19:43:38.345109       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:43:38.631562       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:43:38.714804       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:43:38.794113       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:43:38.924035       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:43:39.024491       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:43:39.058231       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:43:39.292404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.115.35"}
	I1016 19:43:39.341331       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.109.79"}
	I1016 19:43:41.670104       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:43:41.770064       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:43:41.871819       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2368c8473fac0e17d1c889c89f8bd36e68e1075d0382ddf4f2ad6c01dcf5819f] <==
	I1016 19:43:41.453224       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:41.453247       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:43:41.453258       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:43:41.456099       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:43:41.460784       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:43:41.460992       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:43:41.461127       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:43:41.461219       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:43:41.461250       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:43:41.461278       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:43:41.465464       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 19:43:41.466016       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 19:43:41.466183       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:43:41.467163       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:43:41.467214       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 19:43:41.467240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:43:41.467871       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 19:43:41.468157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 19:43:41.472406       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:43:41.473166       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:43:41.473258       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-751669"
	I1016 19:43:41.473319       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 19:43:41.473345       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 19:43:41.480393       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 19:43:41.481781       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [d7b2f5278dfd381e2a01706c9311c5dfd9420611b3505d3fe56c9fb7d71a711c] <==
	I1016 19:43:39.407451       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:43:39.555381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:43:39.657264       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:43:39.657306       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 19:43:39.657398       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:43:39.678185       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:43:39.678236       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:43:39.682460       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:43:39.682774       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:43:39.682847       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:39.684156       1 config.go:200] "Starting service config controller"
	I1016 19:43:39.684226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:43:39.684272       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:43:39.684299       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:43:39.684346       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:43:39.684375       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:43:39.685594       1 config.go:309] "Starting node config controller"
	I1016 19:43:39.686161       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:43:39.686230       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:43:39.784388       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 19:43:39.784387       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:43:39.784497       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [01a051b12eaa75566bd0ed32bda2684f339c52afc7b5e80f79acc29785a0fe59] <==
	I1016 19:43:33.918295       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:43:38.502846       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:43:38.502889       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:43:38.519007       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:43:38.519105       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:43:38.519128       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:43:38.519156       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:43:38.521240       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:38.521270       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:38.521309       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:38.521319       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:43:38.621225       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 19:43:38.621470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:43:38.622178       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026177     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf8n\" (UniqueName: \"kubernetes.io/projected/9596d11f-85b7-4ce4-b23f-262ed61f7dca-kube-api-access-lxf8n\") pod \"kubernetes-dashboard-855c9754f9-m6s27\" (UID: \"9596d11f-85b7-4ce4-b23f-262ed61f7dca\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m6s27"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026235     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9596d11f-85b7-4ce4-b23f-262ed61f7dca-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m6s27\" (UID: \"9596d11f-85b7-4ce4-b23f-262ed61f7dca\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m6s27"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026261     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdfh\" (UniqueName: \"kubernetes.io/projected/50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8-kube-api-access-8jdfh\") pod \"dashboard-metrics-scraper-6ffb444bf9-q8s56\" (UID: \"50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: I1016 19:43:42.026281     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-q8s56\" (UID: \"50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56"
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: W1016 19:43:42.293559     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/crio-451a0c5d5601b300fbc2e3b5d490327e1c59c7780a71cfe29dacbe0840945711 WatchSource:0}: Error finding container 451a0c5d5601b300fbc2e3b5d490327e1c59c7780a71cfe29dacbe0840945711: Status 404 returned error can't find the container with id 451a0c5d5601b300fbc2e3b5d490327e1c59c7780a71cfe29dacbe0840945711
	Oct 16 19:43:42 embed-certs-751669 kubelet[773]: W1016 19:43:42.309112     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ce556d58dc26cb293cb8fd819eb2437e1a1d61d90cb1177d9d262f2ec79cb48/crio-4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda WatchSource:0}: Error finding container 4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda: Status 404 returned error can't find the container with id 4c56846c6edcbd919d6d0b16b65b060ed24146109c60d9ced6429ef162112fda
	Oct 16 19:43:48 embed-certs-751669 kubelet[773]: I1016 19:43:48.289531     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m6s27" podStartSLOduration=2.084138476 podStartE2EDuration="7.28951067s" podCreationTimestamp="2025-10-16 19:43:41 +0000 UTC" firstStartedPulling="2025-10-16 19:43:42.300234763 +0000 UTC m=+11.903573089" lastFinishedPulling="2025-10-16 19:43:47.505606941 +0000 UTC m=+17.108945283" observedRunningTime="2025-10-16 19:43:47.926103897 +0000 UTC m=+17.529442223" watchObservedRunningTime="2025-10-16 19:43:48.28951067 +0000 UTC m=+17.892849144"
	Oct 16 19:43:52 embed-certs-751669 kubelet[773]: I1016 19:43:52.914990     773 scope.go:117] "RemoveContainer" containerID="900be252e4181d3e9679684d6183af8e3662e2bf59f1ed3fb14f32940e2ca275"
	Oct 16 19:43:53 embed-certs-751669 kubelet[773]: I1016 19:43:53.925984     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:43:53 embed-certs-751669 kubelet[773]: E1016 19:43:53.926209     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:43:53 embed-certs-751669 kubelet[773]: I1016 19:43:53.926972     773 scope.go:117] "RemoveContainer" containerID="900be252e4181d3e9679684d6183af8e3662e2bf59f1ed3fb14f32940e2ca275"
	Oct 16 19:43:54 embed-certs-751669 kubelet[773]: I1016 19:43:54.929794     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:43:54 embed-certs-751669 kubelet[773]: E1016 19:43:54.930419     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:02 embed-certs-751669 kubelet[773]: I1016 19:44:02.258395     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:44:02 embed-certs-751669 kubelet[773]: E1016 19:44:02.258590     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:09 embed-certs-751669 kubelet[773]: I1016 19:44:09.968099     773 scope.go:117] "RemoveContainer" containerID="49a17fa869c65153404159392f64f8f9f559558f7abb64bc7d124d18bcc2597a"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: I1016 19:44:13.668297     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: I1016 19:44:13.984548     773 scope.go:117] "RemoveContainer" containerID="013c2e086ffde7a14ad780015fb08d67eb4104365359a68fddd0d16d5707b3bf"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: I1016 19:44:13.985408     773 scope.go:117] "RemoveContainer" containerID="5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192"
	Oct 16 19:44:13 embed-certs-751669 kubelet[773]: E1016 19:44:13.986051     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:22 embed-certs-751669 kubelet[773]: I1016 19:44:22.260211     773 scope.go:117] "RemoveContainer" containerID="5c3452355191ade33c815e5b44cedf8fc61d23935ed2003087f7669841b38192"
	Oct 16 19:44:22 embed-certs-751669 kubelet[773]: E1016 19:44:22.261259     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q8s56_kubernetes-dashboard(50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q8s56" podUID="50bd7fd7-9b4e-4a67-b92a-4ef1c803d1a8"
	Oct 16 19:44:24 embed-certs-751669 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:44:24 embed-certs-751669 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:44:24 embed-certs-751669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e9174935e4924619bbd7b732372997943016a19eebde101e652dde4e3e693e72] <==
	2025/10/16 19:43:47 Using namespace: kubernetes-dashboard
	2025/10/16 19:43:47 Using in-cluster config to connect to apiserver
	2025/10/16 19:43:47 Using secret token for csrf signing
	2025/10/16 19:43:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:43:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:43:47 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 19:43:47 Generating JWE encryption key
	2025/10/16 19:43:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:43:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:43:48 Initializing JWE encryption key from synchronized object
	2025/10/16 19:43:48 Creating in-cluster Sidecar client
	2025/10/16 19:43:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:43:48 Serving insecurely on HTTP port: 9090
	2025/10/16 19:44:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:43:47 Starting overwatch
	
	
	==> storage-provisioner [16a1dc880de0562e7fa670a01682311e2203be018e5a748bbe56cf1c1f6e3e51] <==
	I1016 19:44:10.051813       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:44:10.067760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:44:10.067824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:44:10.073643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:13.529550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:17.789774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:21.388961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:24.442541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:27.464940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:27.477675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:27.477857       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:44:27.478058       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-751669_5d6fb421-a01f-4c28-aebd-bfec95a8366a!
	I1016 19:44:27.479182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b3bf56d-d1bb-49d9-8a23-b33cfd29d57a", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-751669_5d6fb421-a01f-4c28-aebd-bfec95a8366a became leader
	W1016 19:44:27.503216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:27.510679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:44:27.578840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-751669_5d6fb421-a01f-4c28-aebd-bfec95a8366a!
	W1016 19:44:29.513960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:44:29.518953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [49a17fa869c65153404159392f64f8f9f559558f7abb64bc7d124d18bcc2597a] <==
	I1016 19:43:39.357437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:44:09.359543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-751669 -n embed-certs-751669
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-751669 -n embed-certs-751669: exit status 2 (472.649418ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-751669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.57s)
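The log excerpt at the top of this section shows the CRI-O preparation minikube performs when bringing a node up: it points crictl at the CRI-O socket, rewrites the pause image and the cgroup driver in the CRI-O drop-in config, and restarts the runtime. A condensed sketch of those steps, assuming only the paths and values that appear in that excerpt, looks roughly like this:

	# point crictl at the CRI-O socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# set the pause image and the cgroup driver in the CRI-O drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# reload units, restart the runtime, and confirm the socket answers
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	sudo /usr/local/bin/crictl version

In the excerpt above this final check succeeds and reports RuntimeName cri-o, RuntimeVersion 1.34.1.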

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (273.099626ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
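The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe, `sudo runc list -f json`, which dies with `open /run/runc: no such file or directory` on this CRI-O node. A manual sketch of the same probe, assuming the profile is still up (the commands simply mirror the quoted stderr and are not part of the harness):

	out/minikube-linux-arm64 -p newest-cni-408495 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p newest-cni-408495 ssh -- ls -ld /run/runc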
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-408495
helpers_test.go:243: (dbg) docker inspect newest-cni-408495:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84",
	        "Created": "2025-10-16T19:44:41.200270265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 491728,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:44:41.269812233Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/hosts",
	        "LogPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84-json.log",
	        "Name": "/newest-cni-408495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-408495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-408495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84",
	                "LowerDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-408495",
	                "Source": "/var/lib/docker/volumes/newest-cni-408495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-408495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-408495",
	                "name.minikube.sigs.k8s.io": "newest-cni-408495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c2fc996c5e41181ebaf129b5b103fda71920604f700038288cc3f10865ae599",
	            "SandboxKey": "/var/run/docker/netns/1c2fc996c5e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-408495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:28:c9:21:21:02",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3e824b0b22d0962642ad84d54a8f1c5049220ee34215d539c66435401df6a38",
	                    "EndpointID": "742ff21a370d5919df1de79d54ca7695d5d6600e1f3ac6eb485e880e8f8d0d4c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-408495",
	                        "fc99bb32a05a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
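The inspect output shows the kicbase container publishing ports 22, 2376, 5000, 8443 and 32443 on ephemeral loopback ports (33448-33452). Two equivalent ways to read a mapped host port back out; the template form is the same one minikube itself runs later in these logs (there for 22/tcp), while `docker port` is a standard alternative offered here as a suggestion:

	docker port newest-cni-408495 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-408495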
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-408495 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-408495 logs -n 25: (1.081361077s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p cert-expiration-828182                                                                                                                                                                                                                     │ cert-expiration-828182       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-663330                                                                                                                                                                                                                     │ old-k8s-version-663330       │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:41 UTC │ 16 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:44 UTC │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:44:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:44:35.112012  491255 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:44:35.112638  491255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:35.112678  491255 out.go:374] Setting ErrFile to fd 2...
	I1016 19:44:35.112702  491255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:44:35.113044  491255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:44:35.113580  491255 out.go:368] Setting JSON to false
	I1016 19:44:35.114741  491255 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8805,"bootTime":1760635071,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:44:35.114848  491255 start.go:141] virtualization:  
	I1016 19:44:35.120781  491255 out.go:179] * [newest-cni-408495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:44:35.124058  491255 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:44:35.124095  491255 notify.go:220] Checking for updates...
	I1016 19:44:35.130339  491255 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:44:35.133488  491255 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:44:35.136513  491255 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:44:35.140357  491255 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:44:35.143263  491255 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:44:35.146634  491255 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:35.146753  491255 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:44:35.186914  491255 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:44:35.187049  491255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:35.287106  491255 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:44:35.276448447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:35.287218  491255 docker.go:318] overlay module found
	I1016 19:44:35.290292  491255 out.go:179] * Using the docker driver based on user configuration
	I1016 19:44:35.293115  491255 start.go:305] selected driver: docker
	I1016 19:44:35.293222  491255 start.go:925] validating driver "docker" against <nil>
	I1016 19:44:35.293240  491255 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:44:35.293993  491255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:44:35.388812  491255 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:44:35.378721224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:44:35.388968  491255 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1016 19:44:35.389003  491255 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1016 19:44:35.389277  491255 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 19:44:35.392033  491255 out.go:179] * Using Docker driver with root privileges
	I1016 19:44:35.394792  491255 cni.go:84] Creating CNI manager for ""
	I1016 19:44:35.394868  491255 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:44:35.394880  491255 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 19:44:35.394954  491255 start.go:349] cluster config:
	{Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:44:35.400233  491255 out.go:179] * Starting "newest-cni-408495" primary control-plane node in "newest-cni-408495" cluster
	I1016 19:44:35.403049  491255 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:44:35.406137  491255 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:44:35.409107  491255 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:44:35.409252  491255 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:44:35.409272  491255 cache.go:58] Caching tarball of preloaded images
	I1016 19:44:35.409360  491255 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:44:35.409370  491255 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:44:35.409486  491255 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/config.json ...
	I1016 19:44:35.409504  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/config.json: {Name:mk634787b98c2c992ae0e89f45716dd267d183da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:35.409662  491255 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:44:35.431125  491255 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:44:35.431146  491255 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:44:35.431159  491255 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:44:35.431181  491255 start.go:360] acquireMachinesLock for newest-cni-408495: {Name:mk4f5bcb30afe2773f49aca4b6c534db2867d41f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:44:35.431295  491255 start.go:364] duration metric: took 91.119µs to acquireMachinesLock for "newest-cni-408495"
	I1016 19:44:35.431321  491255 start.go:93] Provisioning new machine with config: &{Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:44:35.431416  491255 start.go:125] createHost starting for "" (driver="docker")
	I1016 19:44:35.441482  488039 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 19:44:36.108176  488039 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 19:44:36.363798  488039 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 19:44:36.364368  488039 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 19:44:37.805523  488039 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 19:44:39.479313  488039 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 19:44:39.809444  488039 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 19:44:35.435051  491255 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 19:44:35.435295  491255 start.go:159] libmachine.API.Create for "newest-cni-408495" (driver="docker")
	I1016 19:44:35.435334  491255 client.go:168] LocalClient.Create starting
	I1016 19:44:35.435402  491255 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 19:44:35.435436  491255 main.go:141] libmachine: Decoding PEM data...
	I1016 19:44:35.435449  491255 main.go:141] libmachine: Parsing certificate...
	I1016 19:44:35.435505  491255 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 19:44:35.435522  491255 main.go:141] libmachine: Decoding PEM data...
	I1016 19:44:35.435532  491255 main.go:141] libmachine: Parsing certificate...
	I1016 19:44:35.435901  491255 cli_runner.go:164] Run: docker network inspect newest-cni-408495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 19:44:35.452996  491255 cli_runner.go:211] docker network inspect newest-cni-408495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 19:44:35.453078  491255 network_create.go:284] running [docker network inspect newest-cni-408495] to gather additional debugging logs...
	I1016 19:44:35.453097  491255 cli_runner.go:164] Run: docker network inspect newest-cni-408495
	W1016 19:44:35.474425  491255 cli_runner.go:211] docker network inspect newest-cni-408495 returned with exit code 1
	I1016 19:44:35.474459  491255 network_create.go:287] error running [docker network inspect newest-cni-408495]: docker network inspect newest-cni-408495: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-408495 not found
	I1016 19:44:35.474482  491255 network_create.go:289] output of [docker network inspect newest-cni-408495]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-408495 not found
	
	** /stderr **
	I1016 19:44:35.474600  491255 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:44:35.499634  491255 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
	I1016 19:44:35.500000  491255 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcb5241e782 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:58:26:d7:8f:45} reservation:<nil>}
	I1016 19:44:35.500235  491255 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-26579fafc836 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:48:af:83:92:ac} reservation:<nil>}
	I1016 19:44:35.500535  491255 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-12c5ab8893cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:81:24:94:43:92} reservation:<nil>}
	I1016 19:44:35.500936  491255 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e3510}
	I1016 19:44:35.500958  491255 network_create.go:124] attempt to create docker network newest-cni-408495 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1016 19:44:35.501014  491255 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-408495 newest-cni-408495
	I1016 19:44:35.574765  491255 network_create.go:108] docker network newest-cni-408495 192.168.85.0/24 created
	I1016 19:44:35.574795  491255 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-408495" container
	I1016 19:44:35.574892  491255 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 19:44:35.609750  491255 cli_runner.go:164] Run: docker volume create newest-cni-408495 --label name.minikube.sigs.k8s.io=newest-cni-408495 --label created_by.minikube.sigs.k8s.io=true
	I1016 19:44:35.630758  491255 oci.go:103] Successfully created a docker volume newest-cni-408495
	I1016 19:44:35.630851  491255 cli_runner.go:164] Run: docker run --rm --name newest-cni-408495-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-408495 --entrypoint /usr/bin/test -v newest-cni-408495:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 19:44:36.269686  491255 oci.go:107] Successfully prepared a docker volume newest-cni-408495
	I1016 19:44:36.269734  491255 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:44:36.269754  491255 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 19:44:36.269838  491255 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-408495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 19:44:40.426700  488039 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 19:44:40.602687  488039 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 19:44:40.603018  488039 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 19:44:40.605303  488039 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 19:44:40.623640  488039 out.go:252]   - Booting up control plane ...
	I1016 19:44:40.623767  488039 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 19:44:40.623846  488039 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 19:44:40.623919  488039 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 19:44:40.635083  488039 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 19:44:40.635193  488039 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 19:44:40.643919  488039 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 19:44:40.644024  488039 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 19:44:40.644066  488039 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 19:44:40.785665  488039 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 19:44:40.785816  488039 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 19:44:42.284229  488039 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501823046s
	I1016 19:44:42.288802  488039 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 19:44:42.288900  488039 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1016 19:44:42.289260  488039 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 19:44:42.290026  488039 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 19:44:41.089292  491255 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-408495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.819396116s)
	I1016 19:44:41.089326  491255 kic.go:203] duration metric: took 4.819569304s to extract preloaded images to volume ...
	W1016 19:44:41.089455  491255 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1016 19:44:41.089560  491255 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 19:44:41.181280  491255 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-408495 --name newest-cni-408495 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-408495 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-408495 --network newest-cni-408495 --ip 192.168.85.2 --volume newest-cni-408495:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 19:44:41.549904  491255 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Running}}
	I1016 19:44:41.575718  491255 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:44:41.599958  491255 cli_runner.go:164] Run: docker exec newest-cni-408495 stat /var/lib/dpkg/alternatives/iptables
	I1016 19:44:41.659960  491255 oci.go:144] the created container "newest-cni-408495" has a running status.
	I1016 19:44:41.659995  491255 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa...
	I1016 19:44:41.979714  491255 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 19:44:42.014928  491255 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:44:42.046161  491255 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 19:44:42.046181  491255 kic_runner.go:114] Args: [docker exec --privileged newest-cni-408495 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 19:44:42.135517  491255 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:44:42.177552  491255 machine.go:93] provisionDockerMachine start ...
	I1016 19:44:42.177660  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:42.206774  491255 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:42.207143  491255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1016 19:44:42.207164  491255 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:44:42.207988  491255 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:44:45.430692  491255 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-408495
	
	I1016 19:44:45.430722  491255 ubuntu.go:182] provisioning hostname "newest-cni-408495"
	I1016 19:44:45.430802  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:45.467009  491255 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:45.467343  491255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1016 19:44:45.467355  491255 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-408495 && echo "newest-cni-408495" | sudo tee /etc/hostname
	I1016 19:44:45.663227  491255 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-408495
	
	I1016 19:44:45.663317  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:45.696998  491255 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:45.697345  491255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1016 19:44:45.697364  491255 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-408495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-408495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-408495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:44:45.890648  491255 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:44:45.890676  491255 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:44:45.890696  491255 ubuntu.go:190] setting up certificates
	I1016 19:44:45.890741  491255 provision.go:84] configureAuth start
	I1016 19:44:45.890811  491255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:44:45.917559  491255 provision.go:143] copyHostCerts
	I1016 19:44:45.917627  491255 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:44:45.917637  491255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:44:45.917717  491255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:44:45.917805  491255 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:44:45.917810  491255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:44:45.917845  491255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:44:45.917894  491255 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:44:45.917898  491255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:44:45.917920  491255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:44:45.917964  491255 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.newest-cni-408495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-408495]
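	(Editor's note: the "generating server cert" line above reports a machine server certificate signed by the local CA (certs/ca.pem / ca-key.pem) with the listed SANs. Below is a minimal sketch of issuing such a CA-signed certificate with Go's crypto/x509 — an illustration only, not minikube's implementation; the org and SANs are copied from the log, relative paths are placeholders, and the CA key is assumed to be PKCS#1-encoded.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	blk, _ := pem.Decode(b)
	return blk
}

func main() {
	// CA material; paths shortened from the ones in the log above.
	caCert, err := x509.ParseCertificate(mustPEM(".minikube/certs/ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	// Assumes an "RSA PRIVATE KEY" (PKCS#1) block; use ParsePKCS8PrivateKey otherwise.
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM(".minikube/certs/ca-key.pem").Bytes)
	if err != nil {
		panic(err)
	}

	// Fresh key pair for the machine's server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SANs and org copied from the "generating server cert" log line.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-408495"}},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-408495"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}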
	I1016 19:44:46.192706  491255 provision.go:177] copyRemoteCerts
	I1016 19:44:46.192776  491255 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:44:46.192823  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:46.211237  491255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:44:46.318266  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:44:46.350389  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 19:44:46.383313  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:44:46.411632  491255 provision.go:87] duration metric: took 520.866452ms to configureAuth
	I1016 19:44:46.411669  491255 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:44:46.411897  491255 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:46.412012  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:46.442668  491255 main.go:141] libmachine: Using SSH client type: native
	I1016 19:44:46.442987  491255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1016 19:44:46.443008  491255 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:44:46.837646  491255 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:44:46.837713  491255 machine.go:96] duration metric: took 4.660135662s to provisionDockerMachine
	I1016 19:44:46.837737  491255 client.go:171] duration metric: took 11.402396667s to LocalClient.Create
	I1016 19:44:46.837766  491255 start.go:167] duration metric: took 11.402473419s to libmachine.API.Create "newest-cni-408495"
	I1016 19:44:46.837801  491255 start.go:293] postStartSetup for "newest-cni-408495" (driver="docker")
	I1016 19:44:46.837848  491255 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:44:46.837960  491255 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:44:46.838023  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:46.869234  491255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:44:46.979505  491255 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:44:46.983042  491255 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:44:46.983074  491255 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:44:46.983086  491255 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:44:46.983141  491255 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:44:46.983231  491255 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:44:46.983343  491255 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:44:46.993697  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:44:47.026608  491255 start.go:296] duration metric: took 188.772081ms for postStartSetup
	I1016 19:44:47.027002  491255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:44:47.050793  491255 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/config.json ...
	I1016 19:44:47.051071  491255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:44:47.051119  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:47.075422  491255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:44:47.189560  491255 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:44:47.197794  491255 start.go:128] duration metric: took 11.766363008s to createHost
	I1016 19:44:47.197822  491255 start.go:83] releasing machines lock for "newest-cni-408495", held for 11.766517799s
	I1016 19:44:47.197893  491255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:44:47.234668  491255 ssh_runner.go:195] Run: cat /version.json
	I1016 19:44:47.234725  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:47.234968  491255 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:44:47.235023  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:44:47.271626  491255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:44:47.282709  491255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:44:47.524590  491255 ssh_runner.go:195] Run: systemctl --version
	I1016 19:44:47.533735  491255 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:44:47.603374  491255 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:44:47.608610  491255 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:44:47.608708  491255 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:44:47.649576  491255 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1016 19:44:47.649626  491255 start.go:495] detecting cgroup driver to use...
	I1016 19:44:47.649661  491255 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:44:47.649730  491255 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:44:47.676379  491255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:44:47.694899  491255 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:44:47.694982  491255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:44:47.715948  491255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:44:47.735816  491255 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:44:47.938427  491255 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:44:48.131422  491255 docker.go:234] disabling docker service ...
	I1016 19:44:48.131504  491255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:44:48.167337  491255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:44:48.186204  491255 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:44:48.385775  491255 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:44:48.535728  491255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:44:48.563263  491255 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:44:48.593019  491255 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:44:48.593106  491255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:48.602714  491255 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:44:48.602793  491255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:48.612350  491255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:48.621697  491255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:48.631350  491255 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:44:48.640451  491255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:48.649702  491255 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:48.664344  491255 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:44:48.673952  491255 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:44:48.682817  491255 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:44:48.691050  491255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:44:48.829207  491255 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:44:48.973727  491255 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:44:48.973819  491255 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:44:48.978094  491255 start.go:563] Will wait 60s for crictl version
	I1016 19:44:48.978172  491255 ssh_runner.go:195] Run: which crictl
	I1016 19:44:48.985623  491255 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:44:49.023551  491255 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:44:49.023740  491255 ssh_runner.go:195] Run: crio --version
	I1016 19:44:49.099490  491255 ssh_runner.go:195] Run: crio --version
	I1016 19:44:49.159139  491255 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:44:49.162149  491255 cli_runner.go:164] Run: docker network inspect newest-cni-408495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:44:49.187149  491255 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:44:49.192311  491255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:44:49.211219  491255 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1016 19:44:46.542144  488039 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.252225899s
	I1016 19:44:48.444839  488039 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.154457224s
	I1016 19:44:49.291466  488039 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002242607s
	I1016 19:44:49.321263  488039 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 19:44:49.338982  488039 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 19:44:49.364389  488039 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 19:44:49.364607  488039 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-850436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 19:44:49.379387  488039 kubeadm.go:318] [bootstrap-token] Using token: 86y8fq.tyygnnh9z297pn7i
	I1016 19:44:49.382263  488039 out.go:252]   - Configuring RBAC rules ...
	I1016 19:44:49.382425  488039 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 19:44:49.388938  488039 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 19:44:49.409889  488039 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 19:44:49.417541  488039 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 19:44:49.432289  488039 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 19:44:49.442994  488039 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 19:44:49.699283  488039 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 19:44:49.214098  491255 kubeadm.go:883] updating cluster {Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:44:49.214236  491255 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:44:49.214313  491255 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:44:49.255297  491255 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:44:49.255320  491255 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:44:49.255383  491255 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:44:49.305111  491255 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:44:49.305148  491255 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:44:49.305157  491255 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:44:49.305246  491255 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-408495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:44:49.305332  491255 ssh_runner.go:195] Run: crio config
	I1016 19:44:49.400538  491255 cni.go:84] Creating CNI manager for ""
	I1016 19:44:49.400604  491255 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:44:49.400637  491255 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1016 19:44:49.400693  491255 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-408495 NodeName:newest-cni-408495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:44:49.400857  491255 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-408495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:44:49.400945  491255 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:44:49.412806  491255 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:44:49.412916  491255 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:44:49.424665  491255 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1016 19:44:49.439763  491255 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:44:49.454672  491255 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1016 19:44:49.467962  491255 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:44:49.471839  491255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:44:49.481657  491255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:44:49.611502  491255 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:44:49.628875  491255 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495 for IP: 192.168.85.2
	I1016 19:44:49.628898  491255 certs.go:195] generating shared ca certs ...
	I1016 19:44:49.628916  491255 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:49.629057  491255 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:44:49.629110  491255 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:44:49.629123  491255 certs.go:257] generating profile certs ...
	I1016 19:44:49.629233  491255 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.key
	I1016 19:44:49.629250  491255 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.crt with IP's: []
	I1016 19:44:49.916511  491255 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.crt ...
	I1016 19:44:49.916546  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.crt: {Name:mk4cb39414f5d2675426800d580a7cef689ba21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:49.916754  491255 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.key ...
	I1016 19:44:49.916769  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.key: {Name:mkcee3a17c49bbc62a9245f3e39218a14a6940ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:49.916877  491255 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key.3eb76944
	I1016 19:44:49.916897  491255 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt.3eb76944 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1016 19:44:50.223039  488039 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 19:44:50.718595  488039 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 19:44:50.720324  488039 kubeadm.go:318] 
	I1016 19:44:50.720410  488039 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 19:44:50.720416  488039 kubeadm.go:318] 
	I1016 19:44:50.720497  488039 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 19:44:50.720502  488039 kubeadm.go:318] 
	I1016 19:44:50.720528  488039 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 19:44:50.721107  488039 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 19:44:50.721248  488039 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 19:44:50.721256  488039 kubeadm.go:318] 
	I1016 19:44:50.721313  488039 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 19:44:50.721318  488039 kubeadm.go:318] 
	I1016 19:44:50.721367  488039 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 19:44:50.721372  488039 kubeadm.go:318] 
	I1016 19:44:50.721426  488039 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 19:44:50.721504  488039 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 19:44:50.721575  488039 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 19:44:50.721580  488039 kubeadm.go:318] 
	I1016 19:44:50.722174  488039 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 19:44:50.722257  488039 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 19:44:50.722262  488039 kubeadm.go:318] 
	I1016 19:44:50.722610  488039 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 86y8fq.tyygnnh9z297pn7i \
	I1016 19:44:50.722724  488039 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 19:44:50.722985  488039 kubeadm.go:318] 	--control-plane 
	I1016 19:44:50.722996  488039 kubeadm.go:318] 
	I1016 19:44:50.723354  488039 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 19:44:50.723364  488039 kubeadm.go:318] 
	I1016 19:44:50.724088  488039 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 86y8fq.tyygnnh9z297pn7i \
	I1016 19:44:50.724687  488039 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
	I1016 19:44:50.753399  488039 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 19:44:50.753652  488039 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 19:44:50.753770  488039 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
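	(Editor's note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small illustrative sketch of reproducing it with Go's standard library follows — not minikube or kubeadm code; the path is the one minikube uses for the CA on the node above. The same value can be obtained with the openssl pipeline documented for kubeadm: openssl x509 -pubkey -in ca.crt | openssl rsa -pubin -outform der | openssl dgst -sha256 -hex.)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Cluster CA certificate as placed on the node by minikube above.
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}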
	I1016 19:44:50.753842  488039 cni.go:84] Creating CNI manager for ""
	I1016 19:44:50.753871  488039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:44:50.759249  488039 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 19:44:50.762184  488039 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 19:44:50.784394  488039 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 19:44:50.784410  488039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 19:44:50.835397  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 19:44:51.343383  488039 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 19:44:51.343526  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:51.343596  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-850436 minikube.k8s.io/updated_at=2025_10_16T19_44_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=default-k8s-diff-port-850436 minikube.k8s.io/primary=true
	I1016 19:44:51.635956  488039 ops.go:34] apiserver oom_adj: -16
	I1016 19:44:51.636049  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:52.136164  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:52.636824  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:53.136171  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:53.637006  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:54.136612  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:54.637165  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:50.259053  491255 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt.3eb76944 ...
	I1016 19:44:50.259126  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt.3eb76944: {Name:mka061a34daf7671a730aaf3f4cebba21e93b1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:50.259365  491255 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key.3eb76944 ...
	I1016 19:44:50.259404  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key.3eb76944: {Name:mk60872836d59b2d10bca7b4785003690900f239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:50.259527  491255 certs.go:382] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt.3eb76944 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt
	I1016 19:44:50.259639  491255 certs.go:386] copying /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key.3eb76944 -> /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key
	I1016 19:44:50.259742  491255 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key
	I1016 19:44:50.259790  491255 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.crt with IP's: []
	I1016 19:44:50.702170  491255 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.crt ...
	I1016 19:44:50.702242  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.crt: {Name:mk8f8abd05dfa94206206963e7f64a65e4fee510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:50.702438  491255 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key ...
	I1016 19:44:50.702476  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key: {Name:mk4ed322f2fddd909ae772450f9e1899960cdee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:50.702712  491255 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:44:50.702793  491255 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:44:50.702820  491255 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:44:50.702876  491255 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:44:50.702923  491255 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:44:50.702973  491255 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:44:50.703039  491255 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:44:50.703698  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:44:50.741187  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:44:50.783014  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:44:50.814348  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:44:50.851313  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1016 19:44:50.880395  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:44:50.914302  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:44:50.947742  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 19:44:50.980278  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:44:51.017346  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:44:51.061108  491255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:44:51.109057  491255 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:44:51.135898  491255 ssh_runner.go:195] Run: openssl version
	I1016 19:44:51.148190  491255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:44:51.162887  491255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:44:51.174212  491255 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:44:51.174327  491255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:44:51.228841  491255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:44:51.241953  491255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:44:51.254290  491255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:44:51.260332  491255 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:44:51.260463  491255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:44:51.316874  491255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:44:51.329069  491255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:44:51.339027  491255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:44:51.344096  491255 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:44:51.344156  491255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:44:51.406925  491255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:44:51.428387  491255 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:44:51.433309  491255 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 19:44:51.433384  491255 kubeadm.go:400] StartCluster: {Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:44:51.433466  491255 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:44:51.433539  491255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:44:51.513852  491255 cri.go:89] found id: ""
	I1016 19:44:51.514092  491255 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:44:51.529669  491255 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 19:44:51.540661  491255 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 19:44:51.540784  491255 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 19:44:51.558090  491255 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 19:44:51.558170  491255 kubeadm.go:157] found existing configuration files:
	
	I1016 19:44:51.558272  491255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 19:44:51.572394  491255 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 19:44:51.572539  491255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 19:44:51.583170  491255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 19:44:51.600536  491255 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 19:44:51.600685  491255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 19:44:51.611507  491255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 19:44:51.620969  491255 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 19:44:51.621083  491255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 19:44:51.633108  491255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 19:44:51.648536  491255 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 19:44:51.648608  491255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 19:44:51.660411  491255 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 19:44:51.762170  491255 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 19:44:51.762477  491255 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 19:44:51.853699  491255 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 19:44:55.136491  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:55.636104  488039 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:44:55.798798  488039 kubeadm.go:1113] duration metric: took 4.455303886s to wait for elevateKubeSystemPrivileges
	I1016 19:44:55.798832  488039 kubeadm.go:402] duration metric: took 24.881875723s to StartCluster
	I1016 19:44:55.798851  488039 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:55.798916  488039 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:44:55.799680  488039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:44:55.799898  488039 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:44:55.799990  488039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 19:44:55.800211  488039 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:44:55.800253  488039 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:44:55.800326  488039 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850436"
	I1016 19:44:55.800344  488039 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-850436"
	I1016 19:44:55.800371  488039 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:44:55.801012  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:55.801411  488039 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850436"
	I1016 19:44:55.801435  488039 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850436"
	I1016 19:44:55.801710  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:55.804362  488039 out.go:179] * Verifying Kubernetes components...
	I1016 19:44:55.807748  488039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:44:55.834069  488039 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-850436"
	I1016 19:44:55.834117  488039 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:44:55.834567  488039 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:44:55.853884  488039 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:44:55.857503  488039 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:44:55.857533  488039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:44:55.857619  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:55.879703  488039 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:44:55.879723  488039 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:44:55.879785  488039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:44:55.906595  488039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:44:55.921259  488039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:44:56.398138  488039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:44:56.454898  488039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:44:56.539199  488039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:44:56.539442  488039 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 19:44:57.929572  488039 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.390101606s)
	I1016 19:44:57.929602  488039 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.390371197s)
	I1016 19:44:57.929631  488039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474699258s)
	I1016 19:44:57.929605  488039 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1016 19:44:57.930331  488039 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850436" to be "Ready" ...
	I1016 19:44:57.932916  488039 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1016 19:44:57.935950  488039 addons.go:514] duration metric: took 2.13566525s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1016 19:44:58.436007  488039 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850436" context rescaled to 1 replicas
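	
	The sed pipeline above injects a static host entry into the coredns ConfigMap so that host.minikube.internal resolves to the cluster network gateway (192.168.76.1 in this run) and adds a log directive ahead of errors. A quick way to inspect the patched Corefile afterwards, assuming the kubeconfig context carries the profile name as minikube configures by default:
	
	  kubectl --context default-k8s-diff-port-850436 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	
	The output should show a hosts block listing "192.168.76.1 host.minikube.internal" with fallthrough, placed immediately before the "forward . /etc/resolv.conf" line.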
	W1016 19:44:59.935600  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:02.433471  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:04.934461  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:07.434084  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:10.135935  491255 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 19:45:10.135997  491255 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 19:45:10.136093  491255 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 19:45:10.136163  491255 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1016 19:45:10.136207  491255 kubeadm.go:318] OS: Linux
	I1016 19:45:10.136262  491255 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 19:45:10.136318  491255 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1016 19:45:10.136373  491255 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 19:45:10.136425  491255 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 19:45:10.136479  491255 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 19:45:10.136535  491255 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 19:45:10.136587  491255 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 19:45:10.136641  491255 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 19:45:10.136693  491255 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1016 19:45:10.136769  491255 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 19:45:10.136869  491255 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 19:45:10.136963  491255 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 19:45:10.137030  491255 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 19:45:10.140052  491255 out.go:252]   - Generating certificates and keys ...
	I1016 19:45:10.140159  491255 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 19:45:10.140237  491255 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 19:45:10.140310  491255 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 19:45:10.140371  491255 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 19:45:10.140435  491255 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 19:45:10.140490  491255 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 19:45:10.140548  491255 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 19:45:10.140680  491255 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-408495] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 19:45:10.140737  491255 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 19:45:10.140870  491255 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-408495] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 19:45:10.140940  491255 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 19:45:10.141007  491255 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 19:45:10.141074  491255 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 19:45:10.141171  491255 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 19:45:10.141227  491255 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 19:45:10.141288  491255 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 19:45:10.141345  491255 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 19:45:10.141410  491255 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 19:45:10.141469  491255 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 19:45:10.141553  491255 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 19:45:10.141624  491255 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 19:45:10.146413  491255 out.go:252]   - Booting up control plane ...
	I1016 19:45:10.146554  491255 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 19:45:10.146643  491255 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 19:45:10.146717  491255 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 19:45:10.146837  491255 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 19:45:10.146940  491255 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 19:45:10.147053  491255 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 19:45:10.147166  491255 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 19:45:10.147225  491255 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 19:45:10.147361  491255 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 19:45:10.147475  491255 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 19:45:10.147556  491255 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000837115s
	I1016 19:45:10.147681  491255 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 19:45:10.147769  491255 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1016 19:45:10.147860  491255 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 19:45:10.147941  491255 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 19:45:10.148019  491255 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.370437871s
	I1016 19:45:10.148090  491255 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.463304562s
	I1016 19:45:10.148160  491255 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002217157s
	I1016 19:45:10.148275  491255 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 19:45:10.148405  491255 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 19:45:10.148474  491255 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 19:45:10.148658  491255 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-408495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 19:45:10.148717  491255 kubeadm.go:318] [bootstrap-token] Using token: qpvpfj.8vssuc7hca3aj50s
	I1016 19:45:10.151875  491255 out.go:252]   - Configuring RBAC rules ...
	I1016 19:45:10.152020  491255 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 19:45:10.152111  491255 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 19:45:10.152291  491255 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 19:45:10.152455  491255 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 19:45:10.152588  491255 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 19:45:10.152679  491255 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 19:45:10.152821  491255 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 19:45:10.152879  491255 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 19:45:10.152932  491255 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 19:45:10.152939  491255 kubeadm.go:318] 
	I1016 19:45:10.152998  491255 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 19:45:10.153008  491255 kubeadm.go:318] 
	I1016 19:45:10.153127  491255 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 19:45:10.153252  491255 kubeadm.go:318] 
	I1016 19:45:10.153300  491255 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 19:45:10.153367  491255 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 19:45:10.153425  491255 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 19:45:10.153439  491255 kubeadm.go:318] 
	I1016 19:45:10.153521  491255 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 19:45:10.153533  491255 kubeadm.go:318] 
	I1016 19:45:10.153593  491255 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 19:45:10.153603  491255 kubeadm.go:318] 
	I1016 19:45:10.153658  491255 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 19:45:10.153754  491255 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 19:45:10.153843  491255 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 19:45:10.153854  491255 kubeadm.go:318] 
	I1016 19:45:10.153947  491255 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 19:45:10.154042  491255 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 19:45:10.154051  491255 kubeadm.go:318] 
	I1016 19:45:10.154148  491255 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qpvpfj.8vssuc7hca3aj50s \
	I1016 19:45:10.154261  491255 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 19:45:10.154286  491255 kubeadm.go:318] 	--control-plane 
	I1016 19:45:10.154294  491255 kubeadm.go:318] 
	I1016 19:45:10.154404  491255 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 19:45:10.154421  491255 kubeadm.go:318] 
	I1016 19:45:10.154505  491255 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qpvpfj.8vssuc7hca3aj50s \
	I1016 19:45:10.154622  491255 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
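	
	The join commands printed above pair a bootstrap token with the SHA-256 hash of the cluster CA public key. If a fresh join command is needed later (tokens expire), the usual options are to regenerate one with kubeadm or to recompute the hash from the CA certificate using the openssl pipeline from the kubeadm documentation. Note that this cluster uses /var/lib/minikube/certs as its certificateDir rather than the default /etc/kubernetes/pki, and minikube keeps its kubeadm copy under /var/lib/minikube/binaries, so the sketch below assumes those paths when run inside the node:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm token create --print-join-command
	  # or derive only the CA hash by hand:
	  sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'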
	I1016 19:45:10.154643  491255 cni.go:84] Creating CNI manager for ""
	I1016 19:45:10.154657  491255 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:45:10.157616  491255 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 19:45:10.160767  491255 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 19:45:10.165501  491255 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 19:45:10.165522  491255 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 19:45:10.180668  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 19:45:10.498085  491255 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 19:45:10.498229  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:10.498306  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-408495 minikube.k8s.io/updated_at=2025_10_16T19_45_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=newest-cni-408495 minikube.k8s.io/primary=true
	I1016 19:45:10.707292  491255 ops.go:34] apiserver oom_adj: -16
	I1016 19:45:10.707405  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:11.207966  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:11.707875  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:12.207535  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:12.708139  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:13.208479  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:13.707545  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:14.207850  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:14.707935  491255 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:45:14.887571  491255 kubeadm.go:1113] duration metric: took 4.389392014s to wait for elevateKubeSystemPrivileges
	I1016 19:45:14.887596  491255 kubeadm.go:402] duration metric: took 23.45421601s to StartCluster
	I1016 19:45:14.887613  491255 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:14.887675  491255 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:14.888675  491255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:14.888891  491255 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:45:14.889025  491255 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 19:45:14.889314  491255 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:14.889355  491255 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:45:14.889418  491255 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-408495"
	I1016 19:45:14.889432  491255 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-408495"
	I1016 19:45:14.889452  491255 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:14.890127  491255 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:14.890520  491255 addons.go:69] Setting default-storageclass=true in profile "newest-cni-408495"
	I1016 19:45:14.890548  491255 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-408495"
	I1016 19:45:14.890839  491255 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:14.893276  491255 out.go:179] * Verifying Kubernetes components...
	I1016 19:45:14.897363  491255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:45:14.925643  491255 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1016 19:45:09.934196  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:12.433123  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:14.433950  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:14.928500  491255 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:45:14.928523  491255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:45:14.928598  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:14.940737  491255 addons.go:238] Setting addon default-storageclass=true in "newest-cni-408495"
	I1016 19:45:14.940775  491255 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:14.941250  491255 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:14.961271  491255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:14.982651  491255 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:45:14.982672  491255 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:45:14.982764  491255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:15.010854  491255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:15.299984  491255 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 19:45:15.300110  491255 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:45:15.300224  491255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:45:15.378844  491255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:45:15.974067  491255 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:45:15.974187  491255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:45:15.974334  491255 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1016 19:45:16.200274  491255 api_server.go:72] duration metric: took 1.311355712s to wait for apiserver process to appear ...
	I1016 19:45:16.200299  491255 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:45:16.200319  491255 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:45:16.203806  491255 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1016 19:45:16.207557  491255 addons.go:514] duration metric: took 1.318191571s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1016 19:45:16.215014  491255 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1016 19:45:16.216200  491255 api_server.go:141] control plane version: v1.34.1
	I1016 19:45:16.216226  491255 api_server.go:131] duration metric: took 15.920721ms to wait for apiserver health ...
	I1016 19:45:16.216235  491255 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:45:16.219985  491255 system_pods.go:59] 8 kube-system pods found
	I1016 19:45:16.220017  491255 system_pods.go:61] "coredns-66bc5c9577-wd562" [7e3e6903-1b13-40d0-91ee-345356eedde4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 19:45:16.220026  491255 system_pods.go:61] "etcd-newest-cni-408495" [0e5cebea-13bb-4784-9247-5a021cc3b89d] Running
	I1016 19:45:16.220032  491255 system_pods.go:61] "kindnet-9sr6p" [02d047e5-f3d9-4ab8-8c5d-70f6efb82f39] Running
	I1016 19:45:16.220039  491255 system_pods.go:61] "kube-apiserver-newest-cni-408495" [a564226f-8d4d-4f8a-8129-116f7fde1dad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:45:16.220045  491255 system_pods.go:61] "kube-controller-manager-newest-cni-408495" [134b6611-b670-44be-9bdf-a2258c3c7bed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:45:16.220050  491255 system_pods.go:61] "kube-proxy-lh68f" [cd2f50b1-a314-43cb-a543-15ab3396db7e] Running
	I1016 19:45:16.220057  491255 system_pods.go:61] "kube-scheduler-newest-cni-408495" [7955ac6a-cda9-4c86-a5e4-990606dfbb0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:45:16.220062  491255 system_pods.go:61] "storage-provisioner" [af091ec8-8f1b-458e-916f-2232da7ac31a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 19:45:16.220068  491255 system_pods.go:74] duration metric: took 3.827383ms to wait for pod list to return data ...
	I1016 19:45:16.220076  491255 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:45:16.223201  491255 default_sa.go:45] found service account: "default"
	I1016 19:45:16.223223  491255 default_sa.go:55] duration metric: took 3.141246ms for default service account to be created ...
	I1016 19:45:16.223235  491255 kubeadm.go:586] duration metric: took 1.334321682s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 19:45:16.223252  491255 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:45:16.227159  491255 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:45:16.227264  491255 node_conditions.go:123] node cpu capacity is 2
	I1016 19:45:16.227293  491255 node_conditions.go:105] duration metric: took 4.035664ms to run NodePressure ...
	I1016 19:45:16.227345  491255 start.go:241] waiting for startup goroutines ...
	I1016 19:45:16.478557  491255 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-408495" context rescaled to 1 replicas
	I1016 19:45:16.478599  491255 start.go:246] waiting for cluster config update ...
	I1016 19:45:16.478611  491255 start.go:255] writing updated cluster config ...
	I1016 19:45:16.478941  491255 ssh_runner.go:195] Run: rm -f paused
	I1016 19:45:16.544435  491255 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:45:16.547826  491255 out.go:179] * Done! kubectl is now configured to use "newest-cni-408495" cluster and "default" namespace by default
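	
	At this point the API server is healthy but coredns and storage-provisioner are still Pending (see the system_pods listing above): the node keeps its node.kubernetes.io/not-ready:NoSchedule taint until the freshly applied kindnet CNI reports the pod network as ready. A quick check from the host, assuming the kubeconfig context name matches the profile as minikube sets up by default:
	
	  kubectl --context newest-cni-408495 describe node newest-cni-408495 | grep -i -A1 taints
	  kubectl --context newest-cni-408495 -n kube-system get pods -o wide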
	
	
	==> CRI-O <==
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.110320201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.119106417Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f726559c-c799-4b95-8e2e-7bf79288c5fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.134596093Z" level=info msg="Ran pod sandbox 3a3ef029947d762dd70bc3461b2366f624a0aaf3795c83039938167f26bdd209 with infra container: kube-system/kube-proxy-lh68f/POD" id=f726559c-c799-4b95-8e2e-7bf79288c5fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.138317317Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cf79c211-1c73-478d-baf6-bbe5ec0e19e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.143560215Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d571425d-b3e3-4b46-90bb-1a6fd856b33c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.150376005Z" level=info msg="Creating container: kube-system/kube-proxy-lh68f/kube-proxy" id=f3aec6c0-5fbc-405c-8c7f-e1fbde08ada8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.150748571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.204752052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.209532471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.280313365Z" level=info msg="Created container 6e7ff2a25211a5ce69e4b19a059dc8e1b11f2a325b25dff961541602814c83e9: kube-system/kube-proxy-lh68f/kube-proxy" id=f3aec6c0-5fbc-405c-8c7f-e1fbde08ada8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.281673027Z" level=info msg="Starting container: 6e7ff2a25211a5ce69e4b19a059dc8e1b11f2a325b25dff961541602814c83e9" id=4ce3bdd5-1c59-4939-8819-6f9ea9928ef6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.300477933Z" level=info msg="Started container" PID=1452 containerID=6e7ff2a25211a5ce69e4b19a059dc8e1b11f2a325b25dff961541602814c83e9 description=kube-system/kube-proxy-lh68f/kube-proxy id=4ce3bdd5-1c59-4939-8819-6f9ea9928ef6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a3ef029947d762dd70bc3461b2366f624a0aaf3795c83039938167f26bdd209
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.361223136Z" level=info msg="Running pod sandbox: kube-system/kindnet-9sr6p/POD" id=7768c113-73bf-4a8e-8ff6-15edc9c327f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.36128264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.367944918Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7768c113-73bf-4a8e-8ff6-15edc9c327f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.383560176Z" level=info msg="Ran pod sandbox d3d600cc46c9ca809e8f1a9905e58ac9dd5382f34df59dfdb5b4fc562f83ef56 with infra container: kube-system/kindnet-9sr6p/POD" id=7768c113-73bf-4a8e-8ff6-15edc9c327f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.387966371Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2b6c6242-2aac-4d78-8297-fe8ce1de0bd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.389317853Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c120ae7f-0232-463e-86f7-76fe043057b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.394908972Z" level=info msg="Creating container: kube-system/kindnet-9sr6p/kindnet-cni" id=6d30db4f-9796-413e-8bc6-263bf4d3608f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.395271717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.40263765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.407974671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.427083309Z" level=info msg="Created container 25911cb1b419edbb49b50ddeecf9e5968dfea93d0957be6a3aa7c45b73eff9c6: kube-system/kindnet-9sr6p/kindnet-cni" id=6d30db4f-9796-413e-8bc6-263bf4d3608f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.430294324Z" level=info msg="Starting container: 25911cb1b419edbb49b50ddeecf9e5968dfea93d0957be6a3aa7c45b73eff9c6" id=50446f86-c2fd-466c-b4c1-e0eda41edef1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:15 newest-cni-408495 crio[840]: time="2025-10-16T19:45:15.440082447Z" level=info msg="Started container" PID=1510 containerID=25911cb1b419edbb49b50ddeecf9e5968dfea93d0957be6a3aa7c45b73eff9c6 description=kube-system/kindnet-9sr6p/kindnet-cni id=50446f86-c2fd-466c-b4c1-e0eda41edef1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3d600cc46c9ca809e8f1a9905e58ac9dd5382f34df59dfdb5b4fc562f83ef56
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	25911cb1b419e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   d3d600cc46c9c       kindnet-9sr6p                               kube-system
	6e7ff2a25211a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   3a3ef029947d7       kube-proxy-lh68f                            kube-system
	0506175c97ebb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   52bffabf68f7a       kube-controller-manager-newest-cni-408495   kube-system
	95b04322a3b24       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   8c58ee7236281       kube-apiserver-newest-cni-408495            kube-system
	50b0c95e0e2c4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   4b0c63fb7a030       etcd-newest-cni-408495                      kube-system
	df9b273a1a5dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   e1af8e7fa5cb0       kube-scheduler-newest-cni-408495            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-408495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-408495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=newest-cni-408495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_45_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:45:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-408495
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:45:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:45:09 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:45:09 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:45:09 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 16 Oct 2025 19:45:09 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-408495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                2b847ec6-788c-498e-9669-d3802c2dcb5e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-408495                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-9sr6p                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-408495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-408495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-proxy-lh68f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-408495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-408495 event: Registered Node newest-cni-408495 in Controller
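	
	The Ready=False condition above reflects the same gap: the kubelet reports no CNI configuration file in /etc/cni/net.d/ until kindnet, started only seconds earlier per the CRI-O log, writes its config there. As a sketch for confirming that the file has appeared, using the profile name from this run:
	
	  minikube -p newest-cni-408495 ssh -- sudo ls -l /etc/cni/net.d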
	
	
	==> dmesg <==
	[Oct16 19:21] overlayfs: idmapped layers are currently not supported
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	[Oct16 19:44] overlayfs: idmapped layers are currently not supported
	[Oct16 19:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [50b0c95e0e2c42ae965450a2dcfd329136a71228d01ddec7883de0e8626e6f19] <==
	{"level":"warn","ts":"2025-10-16T19:45:05.217338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.237943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.258300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.271246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.288492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.314601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.336357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.362478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.368831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.394917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.418954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.435280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.451033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.468638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.485414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.508022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.520595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.537069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.554302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.571491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.595976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.624140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.642102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.660464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:05.753375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34946","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:45:18 up  2:27,  0 user,  load average: 3.27, 3.47, 2.99
	Linux newest-cni-408495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [25911cb1b419edbb49b50ddeecf9e5968dfea93d0957be6a3aa7c45b73eff9c6] <==
	I1016 19:45:15.522628       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:45:15.605460       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 19:45:15.605607       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:45:15.605620       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:45:15.605635       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:45:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:45:15.806474       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:45:15.806678       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:45:15.806695       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:45:15.806884       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [95b04322a3b24e0bc9fef3e189ce8d9c6deb5da5341449ac7fd4ba1f4a5284d7] <==
	I1016 19:45:06.917749       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 19:45:06.918359       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:45:06.918475       1 aggregator.go:171] initial CRD sync complete...
	I1016 19:45:06.918583       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 19:45:06.918613       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:45:06.918640       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:45:06.927334       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:45:07.119432       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:45:07.541335       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 19:45:07.549361       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 19:45:07.549384       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:45:08.346312       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:45:08.399256       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:45:08.563568       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 19:45:08.572823       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1016 19:45:08.574152       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:45:08.579511       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:45:08.823321       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:45:09.539539       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:45:09.560297       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 19:45:09.592607       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 19:45:14.573587       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:45:14.579326       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:45:14.722326       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1016 19:45:14.911380       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0506175c97ebb97da55940bc8df748143a5a356174a70aa4d9449aba982e0ac5] <==
	I1016 19:45:13.948167       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:45:13.948183       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 19:45:13.950871       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 19:45:13.950945       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 19:45:13.950979       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 19:45:13.951003       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 19:45:13.951015       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 19:45:13.960642       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-408495" podCIDRs=["10.42.0.0/24"]
	I1016 19:45:13.964663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 19:45:13.965060       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 19:45:13.965113       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:45:13.965257       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 19:45:13.968728       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:45:13.968817       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:45:13.968881       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-408495"
	I1016 19:45:13.968932       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 19:45:13.969204       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:45:13.970117       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 19:45:13.970173       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:45:13.970175       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 19:45:13.974242       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:45:13.974313       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 19:45:14.013421       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:45:14.013449       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:45:14.013465       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6e7ff2a25211a5ce69e4b19a059dc8e1b11f2a325b25dff961541602814c83e9] <==
	I1016 19:45:15.357383       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:45:15.434719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:45:15.540635       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:45:15.540678       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 19:45:15.540773       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:45:15.578017       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:45:15.578076       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:45:15.585413       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:45:15.585728       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:45:15.585740       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:45:15.586840       1 config.go:200] "Starting service config controller"
	I1016 19:45:15.586851       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:45:15.596143       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:45:15.596162       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:45:15.596185       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:45:15.596190       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:45:15.599271       1 config.go:309] "Starting node config controller"
	I1016 19:45:15.599286       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:45:15.599294       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:45:15.688740       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:45:15.698243       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:45:15.698795       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [df9b273a1a5dd2fac30f770badeaa9f69694b512922f0b12e8e94412e4aed9aa] <==
	I1016 19:45:07.170922       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:45:07.171900       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:07.171931       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:07.171952       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 19:45:07.174799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 19:45:07.185196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 19:45:07.185544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 19:45:07.185609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 19:45:07.185670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 19:45:07.185778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 19:45:07.185824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 19:45:07.185866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 19:45:07.185925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 19:45:07.185980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 19:45:07.186029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 19:45:07.186093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 19:45:07.186152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 19:45:07.186197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 19:45:07.186243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 19:45:07.186288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 19:45:07.186407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 19:45:07.186954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 19:45:07.187353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 19:45:08.045391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1016 19:45:08.672339       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.504382    1309 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.602742    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-408495" podStartSLOduration=1.602723041 podStartE2EDuration="1.602723041s" podCreationTimestamp="2025-10-16 19:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:45:10.583270137 +0000 UTC m=+1.198376408" watchObservedRunningTime="2025-10-16 19:45:10.602723041 +0000 UTC m=+1.217829304"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.617724    1309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-408495"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.617898    1309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-408495"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.622907    1309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-408495"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.623470    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-408495" podStartSLOduration=3.623457609 podStartE2EDuration="3.623457609s" podCreationTimestamp="2025-10-16 19:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:45:10.603348632 +0000 UTC m=+1.218454895" watchObservedRunningTime="2025-10-16 19:45:10.623457609 +0000 UTC m=+1.238563880"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: E1016 19:45:10.671869    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-408495\" already exists" pod="kube-system/kube-controller-manager-newest-cni-408495"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: E1016 19:45:10.672115    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-408495\" already exists" pod="kube-system/kube-scheduler-newest-cni-408495"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: E1016 19:45:10.672336    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-408495\" already exists" pod="kube-system/etcd-newest-cni-408495"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.671545    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-408495" podStartSLOduration=1.671528517 podStartE2EDuration="1.671528517s" podCreationTimestamp="2025-10-16 19:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:45:10.623705037 +0000 UTC m=+1.238811308" watchObservedRunningTime="2025-10-16 19:45:10.671528517 +0000 UTC m=+1.286634788"
	Oct 16 19:45:10 newest-cni-408495 kubelet[1309]: I1016 19:45:10.693027    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-408495" podStartSLOduration=1.693008094 podStartE2EDuration="1.693008094s" podCreationTimestamp="2025-10-16 19:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:45:10.672844626 +0000 UTC m=+1.287950889" watchObservedRunningTime="2025-10-16 19:45:10.693008094 +0000 UTC m=+1.308114365"
	Oct 16 19:45:13 newest-cni-408495 kubelet[1309]: I1016 19:45:13.991776    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 16 19:45:13 newest-cni-408495 kubelet[1309]: I1016 19:45:13.992432    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.839845    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-xtables-lock\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.839894    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd2f50b1-a314-43cb-a543-15ab3396db7e-lib-modules\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.839917    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-cni-cfg\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.839945    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-lib-modules\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.839964    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd2f50b1-a314-43cb-a543-15ab3396db7e-xtables-lock\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.839982    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4nc\" (UniqueName: \"kubernetes.io/projected/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-kube-api-access-fg4nc\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.840001    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd2f50b1-a314-43cb-a543-15ab3396db7e-kube-proxy\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:14 newest-cni-408495 kubelet[1309]: I1016 19:45:14.840017    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5kx\" (UniqueName: \"kubernetes.io/projected/cd2f50b1-a314-43cb-a543-15ab3396db7e-kube-api-access-md5kx\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:15 newest-cni-408495 kubelet[1309]: I1016 19:45:15.053583    1309 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 19:45:15 newest-cni-408495 kubelet[1309]: W1016 19:45:15.128653    1309 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/crio-3a3ef029947d762dd70bc3461b2366f624a0aaf3795c83039938167f26bdd209 WatchSource:0}: Error finding container 3a3ef029947d762dd70bc3461b2366f624a0aaf3795c83039938167f26bdd209: Status 404 returned error can't find the container with id 3a3ef029947d762dd70bc3461b2366f624a0aaf3795c83039938167f26bdd209
	Oct 16 19:45:15 newest-cni-408495 kubelet[1309]: W1016 19:45:15.380116    1309 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/crio-d3d600cc46c9ca809e8f1a9905e58ac9dd5382f34df59dfdb5b4fc562f83ef56 WatchSource:0}: Error finding container d3d600cc46c9ca809e8f1a9905e58ac9dd5382f34df59dfdb5b4fc562f83ef56: Status 404 returned error can't find the container with id d3d600cc46c9ca809e8f1a9905e58ac9dd5382f34df59dfdb5b4fc562f83ef56
	Oct 16 19:45:15 newest-cni-408495 kubelet[1309]: I1016 19:45:15.682622    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9sr6p" podStartSLOduration=1.682602452 podStartE2EDuration="1.682602452s" podCreationTimestamp="2025-10-16 19:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:45:15.6545452 +0000 UTC m=+6.269651471" watchObservedRunningTime="2025-10-16 19:45:15.682602452 +0000 UTC m=+6.297708715"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-408495 -n newest-cni-408495
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-408495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wd562 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner: exit status 1 (82.095305ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wd562" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-408495 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-408495 --alsologtostderr -v=1: exit status 80 (1.93005837s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-408495 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:45:36.130200  496662 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:45:36.130539  496662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:36.130583  496662 out.go:374] Setting ErrFile to fd 2...
	I1016 19:45:36.130620  496662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:36.131070  496662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:45:36.131610  496662 out.go:368] Setting JSON to false
	I1016 19:45:36.131698  496662 mustload.go:65] Loading cluster: newest-cni-408495
	I1016 19:45:36.132261  496662 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:36.133467  496662 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:36.151660  496662 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:36.152007  496662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:45:36.218253  496662 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:45:36.208208228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:45:36.219173  496662 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-408495 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 19:45:36.222643  496662 out.go:179] * Pausing node newest-cni-408495 ... 
	I1016 19:45:36.225500  496662 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:36.225925  496662 ssh_runner.go:195] Run: systemctl --version
	I1016 19:45:36.225981  496662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:36.245849  496662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:36.356494  496662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:45:36.378272  496662 pause.go:52] kubelet running: true
	I1016 19:45:36.378358  496662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:45:36.612433  496662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:45:36.612534  496662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:45:36.684530  496662 cri.go:89] found id: "ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da"
	I1016 19:45:36.684550  496662 cri.go:89] found id: "0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc"
	I1016 19:45:36.684555  496662 cri.go:89] found id: "e4f3e3fd9a25fd6f2aeca07c188cfe599751fb591689a318653e360958e27cf5"
	I1016 19:45:36.684558  496662 cri.go:89] found id: "2b96988c62b19f605ebea6bc4b48cd7579b71a62b59f9d6d042e6bd8a3b8bb2e"
	I1016 19:45:36.684561  496662 cri.go:89] found id: "74f48fee211f7b365a3bee8a063b590d0eea60c3639cde2f3e7f1bd036d8f440"
	I1016 19:45:36.684564  496662 cri.go:89] found id: "7b7239d3b6dbc021205aef879390811244649e59d88ccb4c88a903b9ced2779b"
	I1016 19:45:36.684572  496662 cri.go:89] found id: ""
	I1016 19:45:36.684625  496662 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:45:36.695660  496662 retry.go:31] will retry after 132.142992ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:36Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:45:36.829064  496662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:45:36.859010  496662 pause.go:52] kubelet running: false
	I1016 19:45:36.859077  496662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:45:37.062495  496662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:45:37.062570  496662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:45:37.151221  496662 cri.go:89] found id: "ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da"
	I1016 19:45:37.151248  496662 cri.go:89] found id: "0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc"
	I1016 19:45:37.151253  496662 cri.go:89] found id: "e4f3e3fd9a25fd6f2aeca07c188cfe599751fb591689a318653e360958e27cf5"
	I1016 19:45:37.151257  496662 cri.go:89] found id: "2b96988c62b19f605ebea6bc4b48cd7579b71a62b59f9d6d042e6bd8a3b8bb2e"
	I1016 19:45:37.151261  496662 cri.go:89] found id: "74f48fee211f7b365a3bee8a063b590d0eea60c3639cde2f3e7f1bd036d8f440"
	I1016 19:45:37.151267  496662 cri.go:89] found id: "7b7239d3b6dbc021205aef879390811244649e59d88ccb4c88a903b9ced2779b"
	I1016 19:45:37.151271  496662 cri.go:89] found id: ""
	I1016 19:45:37.151321  496662 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:45:37.163592  496662 retry.go:31] will retry after 486.624782ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:37Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:45:37.651297  496662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:45:37.664368  496662 pause.go:52] kubelet running: false
	I1016 19:45:37.664441  496662 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:45:37.871191  496662 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:45:37.871328  496662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:45:37.936508  496662 cri.go:89] found id: "ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da"
	I1016 19:45:37.936566  496662 cri.go:89] found id: "0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc"
	I1016 19:45:37.936585  496662 cri.go:89] found id: "e4f3e3fd9a25fd6f2aeca07c188cfe599751fb591689a318653e360958e27cf5"
	I1016 19:45:37.936605  496662 cri.go:89] found id: "2b96988c62b19f605ebea6bc4b48cd7579b71a62b59f9d6d042e6bd8a3b8bb2e"
	I1016 19:45:37.936625  496662 cri.go:89] found id: "74f48fee211f7b365a3bee8a063b590d0eea60c3639cde2f3e7f1bd036d8f440"
	I1016 19:45:37.936654  496662 cri.go:89] found id: "7b7239d3b6dbc021205aef879390811244649e59d88ccb4c88a903b9ced2779b"
	I1016 19:45:37.936677  496662 cri.go:89] found id: ""
	I1016 19:45:37.936739  496662 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:45:37.959926  496662 out.go:203] 
	W1016 19:45:37.963100  496662 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 19:45:37.963269  496662 out.go:285] * 
	* 
	W1016 19:45:37.976057  496662 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:45:37.979250  496662 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-408495 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-408495
helpers_test.go:243: (dbg) docker inspect newest-cni-408495:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84",
	        "Created": "2025-10-16T19:44:41.200270265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:45:20.80945185Z",
	            "FinishedAt": "2025-10-16T19:45:19.868409158Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/hosts",
	        "LogPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84-json.log",
	        "Name": "/newest-cni-408495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-408495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-408495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84",
	                "LowerDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-408495",
	                "Source": "/var/lib/docker/volumes/newest-cni-408495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-408495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-408495",
	                "name.minikube.sigs.k8s.io": "newest-cni-408495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb884fe29ed1e13aaaad3c740a2fb242896f930092f85f03faa4d019cbd702c0",
	            "SandboxKey": "/var/run/docker/netns/fb884fe29ed1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-408495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:54:06:3e:1e:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3e824b0b22d0962642ad84d54a8f1c5049220ee34215d539c66435401df6a38",
	                    "EndpointID": "a773b50ff80b5dd3826437832570c762c8a9c8c00888a60568f97c4a8817afb0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-408495",
	                        "fc99bb32a05a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495: exit status 2 (453.219169ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-408495 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-408495 logs -n 25: (1.136419309s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:44 UTC │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ stop    │ -p newest-cni-408495 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-408495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ newest-cni-408495 image list --format=json                                                                                                                                                                                                    │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ pause   │ -p newest-cni-408495 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:45:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
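	Every entry below follows the klog header described on the previous line. A minimal sed sketch (assuming GNU sed's -E flag), shown purely for illustration, that splits one of these lines into its named fields:

	    echo 'I1016 19:45:20.526184  494907 out.go:360] Setting OutFile to fd 1 ...' |
	      sed -E 's/^([IWEF])([0-9]{4}) ([0-9:.]+) +([0-9]+) ([^]]+)\] (.*)$/severity=\1 date=\2 time=\3 threadid=\4 source=\5 msg=\6/'
	    # prints: severity=I date=1016 time=19:45:20.526184 threadid=494907 source=out.go:360 msg=Setting OutFile to fd 1 ...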
	I1016 19:45:20.526184  494907 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:45:20.526394  494907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:20.526424  494907 out.go:374] Setting ErrFile to fd 2...
	I1016 19:45:20.526444  494907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:20.526726  494907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:45:20.527181  494907 out.go:368] Setting JSON to false
	I1016 19:45:20.528285  494907 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8850,"bootTime":1760635071,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:45:20.528384  494907 start.go:141] virtualization:  
	I1016 19:45:20.531339  494907 out.go:179] * [newest-cni-408495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:45:20.535213  494907 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:45:20.535294  494907 notify.go:220] Checking for updates...
	I1016 19:45:20.541250  494907 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:45:20.544214  494907 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:20.547042  494907 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:45:20.550017  494907 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:45:20.552978  494907 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:45:20.556647  494907 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:20.557527  494907 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:45:20.597597  494907 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:45:20.597746  494907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:45:20.661299  494907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:45:20.646086094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:45:20.661404  494907 docker.go:318] overlay module found
	I1016 19:45:20.664682  494907 out.go:179] * Using the docker driver based on existing profile
	I1016 19:45:20.667553  494907 start.go:305] selected driver: docker
	I1016 19:45:20.667575  494907 start.go:925] validating driver "docker" against &{Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:45:20.667693  494907 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:45:20.668402  494907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:45:20.725951  494907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:45:20.716051227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:45:20.726291  494907 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 19:45:20.726329  494907 cni.go:84] Creating CNI manager for ""
	I1016 19:45:20.726391  494907 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:45:20.726431  494907 start.go:349] cluster config:
	{Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:45:20.729669  494907 out.go:179] * Starting "newest-cni-408495" primary control-plane node in "newest-cni-408495" cluster
	I1016 19:45:20.732511  494907 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:45:20.735483  494907 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:45:20.738320  494907 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:45:20.738390  494907 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:45:20.738403  494907 cache.go:58] Caching tarball of preloaded images
	I1016 19:45:20.738413  494907 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:45:20.738500  494907 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:45:20.738511  494907 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:45:20.738632  494907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/config.json ...
	I1016 19:45:20.758071  494907 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:45:20.758092  494907 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:45:20.758112  494907 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:45:20.758134  494907 start.go:360] acquireMachinesLock for newest-cni-408495: {Name:mk4f5bcb30afe2773f49aca4b6c534db2867d41f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:45:20.758191  494907 start.go:364] duration metric: took 39.336µs to acquireMachinesLock for "newest-cni-408495"
	I1016 19:45:20.758210  494907 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:45:20.758216  494907 fix.go:54] fixHost starting: 
	I1016 19:45:20.758471  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:20.775271  494907 fix.go:112] recreateIfNeeded on newest-cni-408495: state=Stopped err=<nil>
	W1016 19:45:20.775298  494907 fix.go:138] unexpected machine state, will restart: <nil>
	W1016 19:45:21.433889  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:23.933570  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:20.778528  494907 out.go:252] * Restarting existing docker container for "newest-cni-408495" ...
	I1016 19:45:20.778611  494907 cli_runner.go:164] Run: docker start newest-cni-408495
	I1016 19:45:21.044201  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:21.069105  494907 kic.go:430] container "newest-cni-408495" state is running.
	I1016 19:45:21.069834  494907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:45:21.097296  494907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/config.json ...
	I1016 19:45:21.097683  494907 machine.go:93] provisionDockerMachine start ...
	I1016 19:45:21.097813  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:21.122260  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:21.122582  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:21.122593  494907 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:45:21.123261  494907 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:45:24.272845  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-408495
	
	I1016 19:45:24.272875  494907 ubuntu.go:182] provisioning hostname "newest-cni-408495"
	I1016 19:45:24.272939  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:24.292955  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:24.293343  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:24.293365  494907 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-408495 && echo "newest-cni-408495" | sudo tee /etc/hostname
	I1016 19:45:24.451367  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-408495
	
	I1016 19:45:24.451450  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:24.469618  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:24.469933  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:24.469959  494907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-408495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-408495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-408495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:45:24.617434  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:45:24.617462  494907 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:45:24.617482  494907 ubuntu.go:190] setting up certificates
	I1016 19:45:24.617495  494907 provision.go:84] configureAuth start
	I1016 19:45:24.617563  494907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:45:24.635718  494907 provision.go:143] copyHostCerts
	I1016 19:45:24.635790  494907 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:45:24.635814  494907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:45:24.635898  494907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:45:24.636007  494907 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:45:24.636018  494907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:45:24.636045  494907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:45:24.636111  494907 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:45:24.636120  494907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:45:24.636144  494907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:45:24.636198  494907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.newest-cni-408495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-408495]
	I1016 19:45:25.376985  494907 provision.go:177] copyRemoteCerts
	I1016 19:45:25.377068  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:45:25.377111  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:25.395831  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:25.496955  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:45:25.515722  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 19:45:25.534032  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:45:25.552169  494907 provision.go:87] duration metric: took 934.659269ms to configureAuth
	I1016 19:45:25.552197  494907 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:45:25.552400  494907 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:25.552508  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:25.570945  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:25.571255  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:25.571276  494907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:45:25.870532  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:45:25.870555  494907 machine.go:96] duration metric: took 4.772858672s to provisionDockerMachine
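	Illustrative follow-up (not part of the test run): the drop-in written by the SSH command above can be checked from the CI workspace with the same binary; the expected content is exactly the CRIO_MINIKUBE_OPTIONS line echoed back in the output.

	    # hypothetical verification commands, assuming the profile is still up
	    out/minikube-linux-arm64 ssh -p newest-cni-408495 -- cat /etc/sysconfig/crio.minikube
	    out/minikube-linux-arm64 ssh -p newest-cni-408495 -- sudo systemctl is-active crio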
	I1016 19:45:25.870566  494907 start.go:293] postStartSetup for "newest-cni-408495" (driver="docker")
	I1016 19:45:25.870576  494907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:45:25.870637  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:45:25.870676  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:25.888389  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:25.992960  494907 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:45:25.996754  494907 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:45:25.996785  494907 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:45:25.996798  494907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:45:25.996855  494907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:45:25.996945  494907 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:45:25.997059  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:45:26.004677  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:45:26.028244  494907 start.go:296] duration metric: took 157.645403ms for postStartSetup
	I1016 19:45:26.028334  494907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:45:26.028375  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:26.046401  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:26.146435  494907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:45:26.151632  494907 fix.go:56] duration metric: took 5.393408657s for fixHost
	I1016 19:45:26.151658  494907 start.go:83] releasing machines lock for "newest-cni-408495", held for 5.393458733s
	I1016 19:45:26.151728  494907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:45:26.169405  494907 ssh_runner.go:195] Run: cat /version.json
	I1016 19:45:26.169435  494907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:45:26.169464  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:26.169556  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:26.190497  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:26.192195  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:26.292712  494907 ssh_runner.go:195] Run: systemctl --version
	I1016 19:45:26.383112  494907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:45:26.418674  494907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:45:26.423067  494907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:45:26.423138  494907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:45:26.430962  494907 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:45:26.430988  494907 start.go:495] detecting cgroup driver to use...
	I1016 19:45:26.431021  494907 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:45:26.431076  494907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:45:26.449023  494907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:45:26.462080  494907 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:45:26.462184  494907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:45:26.478102  494907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:45:26.491046  494907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:45:26.607329  494907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:45:26.726347  494907 docker.go:234] disabling docker service ...
	I1016 19:45:26.726423  494907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:45:26.742156  494907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:45:26.756440  494907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:45:26.890745  494907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:45:27.018119  494907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:45:27.031781  494907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:45:27.045792  494907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:45:27.045891  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.055234  494907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:45:27.055361  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.063816  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.072422  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.082001  494907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:45:27.090802  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.099836  494907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.108376  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.117098  494907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:45:27.124646  494907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:45:27.132115  494907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:45:27.249213  494907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:45:27.392790  494907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:45:27.392934  494907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:45:27.396912  494907 start.go:563] Will wait 60s for crictl version
	I1016 19:45:27.397032  494907 ssh_runner.go:195] Run: which crictl
	I1016 19:45:27.400875  494907 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:45:27.427381  494907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
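	Purely as an illustration, the same version report can be requested directly over the endpoint configured in /etc/crictl.yaml above (run inside the node, e.g. via minikube ssh):

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version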
	I1016 19:45:27.427572  494907 ssh_runner.go:195] Run: crio --version
	I1016 19:45:27.461537  494907 ssh_runner.go:195] Run: crio --version
	I1016 19:45:27.498777  494907 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:45:27.501656  494907 cli_runner.go:164] Run: docker network inspect newest-cni-408495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:45:27.518543  494907 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:45:27.522667  494907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
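	The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal mapping, append the fresh one, and copy the temp file back into place. Expanded for readability (same commands as in the log, shown as a standalone sketch):

	    { grep -v $'\thost.minikube.internal$' /etc/hosts   # keep every line except an old mapping
	      echo $'192.168.85.1\thost.minikube.internal'      # append the gateway IP used in this run
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts                        # replace the file in a single step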
	I1016 19:45:27.535298  494907 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1016 19:45:27.538110  494907 kubeadm.go:883] updating cluster {Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:45:27.538261  494907 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:45:27.538363  494907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:45:27.581010  494907 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:45:27.581035  494907 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:45:27.581098  494907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:45:27.611013  494907 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:45:27.611036  494907 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:45:27.611043  494907 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:45:27.611142  494907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-408495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
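	For illustration only: the kubelet unit fragment above is what minikube later copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 367-byte scp at 19:45:27.690553); inside the node the effective unit can be inspected with:

	    sudo systemctl cat kubelet
	    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf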
	I1016 19:45:27.611227  494907 ssh_runner.go:195] Run: crio config
	I1016 19:45:27.674792  494907 cni.go:84] Creating CNI manager for ""
	I1016 19:45:27.674816  494907 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:45:27.674839  494907 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1016 19:45:27.674883  494907 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-408495 NodeName:newest-cni-408495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:45:27.675051  494907 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-408495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
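	Illustrative note: the rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2212-byte scp just below) and later in this log it is diffed against the active file to decide whether the existing control plane needs reconfiguring. The same check can be reproduced by hand:

	    # hypothetical manual re-run of the comparison minikube performs
	    out/minikube-linux-arm64 ssh -p newest-cni-408495 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new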
	
	I1016 19:45:27.675126  494907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:45:27.682946  494907 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:45:27.683038  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:45:27.690553  494907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1016 19:45:27.703005  494907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:45:27.715512  494907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1016 19:45:27.728071  494907 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:45:27.731454  494907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:45:27.740825  494907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:45:27.855473  494907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:45:27.870318  494907 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495 for IP: 192.168.85.2
	I1016 19:45:27.870393  494907 certs.go:195] generating shared ca certs ...
	I1016 19:45:27.870423  494907 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:27.870614  494907 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:45:27.870695  494907 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:45:27.870717  494907 certs.go:257] generating profile certs ...
	I1016 19:45:27.870937  494907 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.key
	I1016 19:45:27.871069  494907 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key.3eb76944
	I1016 19:45:27.871149  494907 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key
	I1016 19:45:27.871301  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:45:27.871379  494907 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:45:27.871404  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:45:27.871468  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:45:27.871521  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:45:27.871568  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:45:27.871645  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:45:27.872507  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:45:27.897565  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:45:27.917289  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:45:27.941172  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:45:27.963061  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1016 19:45:27.987133  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:45:28.016002  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:45:28.045909  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 19:45:28.075608  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:45:28.100196  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:45:28.119801  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:45:28.144484  494907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:45:28.158105  494907 ssh_runner.go:195] Run: openssl version
	I1016 19:45:28.164694  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:45:28.173831  494907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:45:28.177555  494907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:45:28.177622  494907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:45:28.219121  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:45:28.226980  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:45:28.235374  494907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:45:28.239075  494907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:45:28.239139  494907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:45:28.280053  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:45:28.288735  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:45:28.297195  494907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:45:28.300795  494907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:45:28.300860  494907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:45:28.342231  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
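	The link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames, which is how OpenSSL locates trusted CAs in /etc/ssl/certs; the openssl x509 -hash calls in the log compute them. As a standalone check:

	    # in this run the minikube CA hashed to b5213941, hence the /etc/ssl/certs/b5213941.0 symlink
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem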
	I1016 19:45:28.350021  494907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:45:28.353834  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:45:28.395999  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:45:28.437584  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:45:28.482287  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:45:28.540182  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:45:28.610990  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 19:45:28.714420  494907 kubeadm.go:400] StartCluster: {Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:45:28.714533  494907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:45:28.714649  494907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:45:28.775050  494907 cri.go:89] found id: "e4f3e3fd9a25fd6f2aeca07c188cfe599751fb591689a318653e360958e27cf5"
	I1016 19:45:28.775086  494907 cri.go:89] found id: "2b96988c62b19f605ebea6bc4b48cd7579b71a62b59f9d6d042e6bd8a3b8bb2e"
	I1016 19:45:28.775092  494907 cri.go:89] found id: "74f48fee211f7b365a3bee8a063b590d0eea60c3639cde2f3e7f1bd036d8f440"
	I1016 19:45:28.775096  494907 cri.go:89] found id: "7b7239d3b6dbc021205aef879390811244649e59d88ccb4c88a903b9ced2779b"
	I1016 19:45:28.775100  494907 cri.go:89] found id: ""
	I1016 19:45:28.775182  494907 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:45:28.789928  494907 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:28Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:45:28.790069  494907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:45:28.805916  494907 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:45:28.805989  494907 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:45:28.806330  494907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:45:28.820916  494907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:45:28.821677  494907 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-408495" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:28.822015  494907 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-408495" cluster setting kubeconfig missing "newest-cni-408495" context setting]
	I1016 19:45:28.822663  494907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:28.824633  494907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:45:28.832653  494907 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1016 19:45:28.832727  494907 kubeadm.go:601] duration metric: took 26.716941ms to restartPrimaryControlPlane
	I1016 19:45:28.832752  494907 kubeadm.go:402] duration metric: took 118.341901ms to StartCluster
	I1016 19:45:28.832781  494907 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:28.832860  494907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:28.833830  494907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:28.834098  494907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:45:28.834348  494907 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:28.834296  494907 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:45:28.834523  494907 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-408495"
	I1016 19:45:28.834537  494907 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-408495"
	W1016 19:45:28.834543  494907 addons.go:247] addon storage-provisioner should already be in state true
	I1016 19:45:28.834566  494907 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:28.835029  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.835489  494907 addons.go:69] Setting default-storageclass=true in profile "newest-cni-408495"
	I1016 19:45:28.835522  494907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-408495"
	I1016 19:45:28.835586  494907 addons.go:69] Setting dashboard=true in profile "newest-cni-408495"
	I1016 19:45:28.835602  494907 addons.go:238] Setting addon dashboard=true in "newest-cni-408495"
	W1016 19:45:28.835608  494907 addons.go:247] addon dashboard should already be in state true
	I1016 19:45:28.835635  494907 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:28.835818  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.836036  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.839074  494907 out.go:179] * Verifying Kubernetes components...
	I1016 19:45:28.847359  494907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:45:28.885052  494907 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:45:28.890516  494907 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:45:28.890538  494907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:45:28.890606  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:28.900449  494907 addons.go:238] Setting addon default-storageclass=true in "newest-cni-408495"
	W1016 19:45:28.900477  494907 addons.go:247] addon default-storageclass should already be in state true
	I1016 19:45:28.900525  494907 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:28.900955  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.904042  494907 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 19:45:28.912807  494907 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1016 19:45:25.933676  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:27.934292  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:28.917201  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 19:45:28.917227  494907 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 19:45:28.917300  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:28.941223  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:28.941350  494907 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:45:28.941364  494907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:45:28.941419  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:28.973679  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:28.980898  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:29.159442  494907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:45:29.205352  494907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:45:29.211364  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 19:45:29.211436  494907 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 19:45:29.257405  494907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:45:29.280655  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 19:45:29.280730  494907 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 19:45:29.342397  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 19:45:29.342466  494907 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 19:45:29.402856  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 19:45:29.402926  494907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 19:45:29.458295  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 19:45:29.458369  494907 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 19:45:29.486967  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 19:45:29.487043  494907 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 19:45:29.511012  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 19:45:29.511091  494907 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 19:45:29.530341  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 19:45:29.530413  494907 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 19:45:29.553346  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:45:29.553426  494907 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 19:45:29.576562  494907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1016 19:45:29.936552  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:32.433276  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:34.433810  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:35.060463  494907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.900947898s)
	I1016 19:45:35.060523  494907 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.855100696s)
	I1016 19:45:35.060556  494907 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:45:35.060616  494907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:45:35.060695  494907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.803230214s)
	I1016 19:45:35.061114  494907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.484478717s)
	I1016 19:45:35.064096  494907 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-408495 addons enable metrics-server
	
	I1016 19:45:35.085121  494907 api_server.go:72] duration metric: took 6.25073476s to wait for apiserver process to appear ...
	I1016 19:45:35.085176  494907 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:45:35.085204  494907 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:45:35.095274  494907 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1016 19:45:35.096898  494907 api_server.go:141] control plane version: v1.34.1
	I1016 19:45:35.096979  494907 api_server.go:131] duration metric: took 11.794638ms to wait for apiserver health ...
	I1016 19:45:35.097016  494907 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:45:35.101500  494907 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 19:45:35.104629  494907 addons.go:514] duration metric: took 6.27033569s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1016 19:45:35.106197  494907 system_pods.go:59] 8 kube-system pods found
	I1016 19:45:35.106231  494907 system_pods.go:61] "coredns-66bc5c9577-wd562" [7e3e6903-1b13-40d0-91ee-345356eedde4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 19:45:35.106241  494907 system_pods.go:61] "etcd-newest-cni-408495" [0e5cebea-13bb-4784-9247-5a021cc3b89d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:45:35.106251  494907 system_pods.go:61] "kindnet-9sr6p" [02d047e5-f3d9-4ab8-8c5d-70f6efb82f39] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 19:45:35.106258  494907 system_pods.go:61] "kube-apiserver-newest-cni-408495" [a564226f-8d4d-4f8a-8129-116f7fde1dad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:45:35.106264  494907 system_pods.go:61] "kube-controller-manager-newest-cni-408495" [134b6611-b670-44be-9bdf-a2258c3c7bed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:45:35.106271  494907 system_pods.go:61] "kube-proxy-lh68f" [cd2f50b1-a314-43cb-a543-15ab3396db7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 19:45:35.106278  494907 system_pods.go:61] "kube-scheduler-newest-cni-408495" [7955ac6a-cda9-4c86-a5e4-990606dfbb0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:45:35.106283  494907 system_pods.go:61] "storage-provisioner" [af091ec8-8f1b-458e-916f-2232da7ac31a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 19:45:35.106290  494907 system_pods.go:74] duration metric: took 9.241952ms to wait for pod list to return data ...
	I1016 19:45:35.106299  494907 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:45:35.110168  494907 default_sa.go:45] found service account: "default"
	I1016 19:45:35.110197  494907 default_sa.go:55] duration metric: took 3.88505ms for default service account to be created ...
	I1016 19:45:35.110211  494907 kubeadm.go:586] duration metric: took 6.275830743s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 19:45:35.110229  494907 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:45:35.123008  494907 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:45:35.123097  494907 node_conditions.go:123] node cpu capacity is 2
	I1016 19:45:35.123126  494907 node_conditions.go:105] duration metric: took 12.889772ms to run NodePressure ...
	I1016 19:45:35.123173  494907 start.go:241] waiting for startup goroutines ...
	I1016 19:45:35.123201  494907 start.go:246] waiting for cluster config update ...
	I1016 19:45:35.123231  494907 start.go:255] writing updated cluster config ...
	I1016 19:45:35.123602  494907 ssh_runner.go:195] Run: rm -f paused
	I1016 19:45:35.239781  494907 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:45:35.243009  494907 out.go:179] * Done! kubectl is now configured to use "newest-cni-408495" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.284296194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.293383173Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=40d99df0-323e-47f1-8d4d-106017f178b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.295851222Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-lh68f/POD" id=2e238bd8-ae00-47fa-8ddc-7c25cbe927c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.295936844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.29985089Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2e238bd8-ae00-47fa-8ddc-7c25cbe927c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.302308043Z" level=info msg="Ran pod sandbox f71613153d16647ffe8f433bf34eef00acac5296cc75f9f056c3dac4ab92bda0 with infra container: kube-system/kindnet-9sr6p/POD" id=40d99df0-323e-47f1-8d4d-106017f178b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.305893323Z" level=info msg="Ran pod sandbox 0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced with infra container: kube-system/kube-proxy-lh68f/POD" id=2e238bd8-ae00-47fa-8ddc-7c25cbe927c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.309641927Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3b5a9740-8cdc-4bda-85f3-82519b6a024e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.311198384Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=eae6b123-afd1-4ed6-876b-97acbaa342f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.3125621Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5e8f495e-c896-4ed5-8e54-a01bbb1e330c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.312953941Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ac0405e6-5f6a-43d7-ab8b-c47e1f971a25 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.314393301Z" level=info msg="Creating container: kube-system/kindnet-9sr6p/kindnet-cni" id=341566df-4f8b-4766-b6ce-1e40dab65c40 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.314721469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.315702042Z" level=info msg="Creating container: kube-system/kube-proxy-lh68f/kube-proxy" id=66841ab3-acf1-4857-9a6f-d9f92be5532f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.318021502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.328794146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.337793224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.340224144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.340855749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.373960882Z" level=info msg="Created container 0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc: kube-system/kube-proxy-lh68f/kube-proxy" id=66841ab3-acf1-4857-9a6f-d9f92be5532f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.374825368Z" level=info msg="Starting container: 0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc" id=a6ecfc01-76c7-44b7-9332-c7e4ab690603 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.377586704Z" level=info msg="Created container ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da: kube-system/kindnet-9sr6p/kindnet-cni" id=341566df-4f8b-4766-b6ce-1e40dab65c40 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.385774457Z" level=info msg="Starting container: ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da" id=29b8e0a9-41ed-4808-bfbe-7cd9836aa3ca name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.386431343Z" level=info msg="Started container" PID=1059 containerID=0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc description=kube-system/kube-proxy-lh68f/kube-proxy id=a6ecfc01-76c7-44b7-9332-c7e4ab690603 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.389472485Z" level=info msg="Started container" PID=1058 containerID=ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da description=kube-system/kindnet-9sr6p/kindnet-cni id=29b8e0a9-41ed-4808-bfbe-7cd9836aa3ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=f71613153d16647ffe8f433bf34eef00acac5296cc75f9f056c3dac4ab92bda0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ef8a9127987ec       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   f71613153d166       kindnet-9sr6p                               kube-system
	0cc8f8b2d746b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   0ba0fd8f6bcfc       kube-proxy-lh68f                            kube-system
	e4f3e3fd9a25f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   d643fc2f578c1       kube-scheduler-newest-cni-408495            kube-system
	2b96988c62b19       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   449fcb2b9b5d3       etcd-newest-cni-408495                      kube-system
	74f48fee211f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   ee4f62598e572       kube-controller-manager-newest-cni-408495   kube-system
	7b7239d3b6dbc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   f57168011e4e8       kube-apiserver-newest-cni-408495            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-408495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-408495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=newest-cni-408495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_45_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:45:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-408495
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:45:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-408495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                2b847ec6-788c-498e-9669-d3802c2dcb5e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-408495                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-9sr6p                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-408495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-408495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-lh68f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-408495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 30s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  30s                kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     30s                kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   Starting                 30s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           26s                node-controller  Node newest-cni-408495 event: Registered Node newest-cni-408495 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-408495 event: Registered Node newest-cni-408495 in Controller
	
	
	==> dmesg <==
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	[Oct16 19:44] overlayfs: idmapped layers are currently not supported
	[Oct16 19:45] overlayfs: idmapped layers are currently not supported
	[ +26.426905] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2b96988c62b19f605ebea6bc4b48cd7579b71a62b59f9d6d042e6bd8a3b8bb2e] <==
	{"level":"warn","ts":"2025-10-16T19:45:32.174759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.192808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.218459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.235911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.256296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.281229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.318758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.324455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.342112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.357399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.373850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.397950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.418347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.433640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.453062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.475251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.496692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.513456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.530332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.551358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.568898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.598713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.641935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.654948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.775705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35460","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:45:39 up  2:27,  0 user,  load average: 3.52, 3.49, 3.01
	Linux newest-cni-408495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da] <==
	I1016 19:45:34.417815       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:45:34.421491       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 19:45:34.421628       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:45:34.421640       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:45:34.421652       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:45:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:45:34.625824       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:45:34.625872       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:45:34.625881       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:45:34.626581       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [7b7239d3b6dbc021205aef879390811244649e59d88ccb4c88a903b9ced2779b] <==
	I1016 19:45:33.778123       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:45:33.778390       1 aggregator.go:171] initial CRD sync complete...
	I1016 19:45:33.778402       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 19:45:33.778410       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:45:33.778418       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:45:33.797777       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:45:33.801283       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 19:45:33.803542       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:45:33.810104       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:45:33.810141       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:45:33.816968       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:45:33.817065       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:45:33.817264       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 19:45:33.839915       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:45:34.037589       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:45:34.520485       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:45:34.703020       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:45:34.763922       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:45:34.816163       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:45:34.849177       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:45:34.947366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.59.217"}
	I1016 19:45:34.970434       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.163.155"}
	I1016 19:45:36.976602       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:45:37.325915       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:45:37.525872       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [74f48fee211f7b365a3bee8a063b590d0eea60c3639cde2f3e7f1bd036d8f440] <==
	I1016 19:45:36.974048       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 19:45:36.976134       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:45:36.976226       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 19:45:36.982139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:45:36.982221       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:45:36.982252       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:45:36.982364       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:45:36.983781       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 19:45:36.989371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:45:36.990612       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:45:37.007666       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:45:37.007949       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 19:45:37.020209       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 19:45:37.021258       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:45:37.021372       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:45:37.021621       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 19:45:37.021802       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 19:45:37.021888       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 19:45:37.022186       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 19:45:37.022382       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:45:37.023789       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-408495"
	I1016 19:45:37.023921       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 19:45:37.028502       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:45:37.029226       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:45:37.029430       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc] <==
	I1016 19:45:34.435665       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:45:34.540764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:45:34.641798       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:45:34.641838       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 19:45:34.641912       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:45:34.929678       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:45:34.929803       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:45:34.945497       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:45:34.962725       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:45:34.963472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:45:34.967987       1 config.go:200] "Starting service config controller"
	I1016 19:45:34.968067       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:45:34.968112       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:45:34.968155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:45:34.968193       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:45:34.968226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:45:34.978999       1 config.go:309] "Starting node config controller"
	I1016 19:45:34.979017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:45:34.979024       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:45:35.076025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:45:35.096801       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:45:35.098425       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e4f3e3fd9a25fd6f2aeca07c188cfe599751fb591689a318653e360958e27cf5] <==
	I1016 19:45:31.791024       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:45:34.141952       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:45:34.141994       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:45:34.153325       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:45:34.153374       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:45:34.153402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:34.153409       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:34.153425       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:45:34.153438       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:45:34.153917       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:45:34.154065       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:45:34.253533       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:34.253567       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:45:34.253542       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.351997     725 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-408495\" not found" node="newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.694512     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.830809     725 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.830923     725 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.830953     725 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.832116     725 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.869607     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-408495\" already exists" pod="kube-system/kube-scheduler-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.869642     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.880576     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-408495\" already exists" pod="kube-system/etcd-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.880618     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.903593     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-408495\" already exists" pod="kube-system/kube-apiserver-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.903626     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.943870     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-408495\" already exists" pod="kube-system/kube-controller-manager-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.975252     725 apiserver.go:52] "Watching apiserver"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.992772     725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020732     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd2f50b1-a314-43cb-a543-15ab3396db7e-xtables-lock\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020790     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd2f50b1-a314-43cb-a543-15ab3396db7e-lib-modules\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020808     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-xtables-lock\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020846     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-cni-cfg\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020866     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-lib-modules\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.068362     725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: W1016 19:45:34.304750     725 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/crio-0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced WatchSource:0}: Error finding container 0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced: Status 404 returned error can't find the container with id 0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced
	Oct 16 19:45:36 newest-cni-408495 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:45:36 newest-cni-408495 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:45:36 newest-cni-408495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-408495 -n newest-cni-408495
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-408495 -n newest-cni-408495: exit status 2 (346.533099ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-408495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl: exit status 1 (94.16936ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wd562" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5cbvm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fs5nl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-408495
helpers_test.go:243: (dbg) docker inspect newest-cni-408495:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84",
	        "Created": "2025-10-16T19:44:41.200270265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:45:20.80945185Z",
	            "FinishedAt": "2025-10-16T19:45:19.868409158Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/hosts",
	        "LogPath": "/var/lib/docker/containers/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84-json.log",
	        "Name": "/newest-cni-408495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-408495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-408495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84",
	                "LowerDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a62320e2d2184bb8592ab3447890777471b3d5ecc07825c30e50a8feaf660a01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-408495",
	                "Source": "/var/lib/docker/volumes/newest-cni-408495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-408495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-408495",
	                "name.minikube.sigs.k8s.io": "newest-cni-408495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb884fe29ed1e13aaaad3c740a2fb242896f930092f85f03faa4d019cbd702c0",
	            "SandboxKey": "/var/run/docker/netns/fb884fe29ed1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-408495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:54:06:3e:1e:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3e824b0b22d0962642ad84d54a8f1c5049220ee34215d539c66435401df6a38",
	                    "EndpointID": "a773b50ff80b5dd3826437832570c762c8a9c8c00888a60568f97c4a8817afb0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-408495",
	                        "fc99bb32a05a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495: exit status 2 (370.117339ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-408495 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-408495 logs -n 25: (1.244008033s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-225696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │                     │
	│ stop    │ -p no-preload-225696 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:42 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:44 UTC │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ stop    │ -p newest-cni-408495 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-408495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ newest-cni-408495 image list --format=json                                                                                                                                                                                                    │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ pause   │ -p newest-cni-408495 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:45:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:45:20.526184  494907 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:45:20.526394  494907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:20.526424  494907 out.go:374] Setting ErrFile to fd 2...
	I1016 19:45:20.526444  494907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:20.526726  494907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:45:20.527181  494907 out.go:368] Setting JSON to false
	I1016 19:45:20.528285  494907 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8850,"bootTime":1760635071,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:45:20.528384  494907 start.go:141] virtualization:  
	I1016 19:45:20.531339  494907 out.go:179] * [newest-cni-408495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:45:20.535213  494907 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:45:20.535294  494907 notify.go:220] Checking for updates...
	I1016 19:45:20.541250  494907 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:45:20.544214  494907 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:20.547042  494907 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:45:20.550017  494907 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:45:20.552978  494907 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:45:20.556647  494907 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:20.557527  494907 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:45:20.597597  494907 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:45:20.597746  494907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:45:20.661299  494907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:45:20.646086094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:45:20.661404  494907 docker.go:318] overlay module found
	I1016 19:45:20.664682  494907 out.go:179] * Using the docker driver based on existing profile
	I1016 19:45:20.667553  494907 start.go:305] selected driver: docker
	I1016 19:45:20.667575  494907 start.go:925] validating driver "docker" against &{Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:45:20.667693  494907 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:45:20.668402  494907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:45:20.725951  494907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:45:20.716051227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:45:20.726291  494907 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 19:45:20.726329  494907 cni.go:84] Creating CNI manager for ""
	I1016 19:45:20.726391  494907 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:45:20.726431  494907 start.go:349] cluster config:
	{Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:45:20.729669  494907 out.go:179] * Starting "newest-cni-408495" primary control-plane node in "newest-cni-408495" cluster
	I1016 19:45:20.732511  494907 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:45:20.735483  494907 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:45:20.738320  494907 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:45:20.738390  494907 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:45:20.738403  494907 cache.go:58] Caching tarball of preloaded images
	I1016 19:45:20.738413  494907 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:45:20.738500  494907 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:45:20.738511  494907 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:45:20.738632  494907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/config.json ...
	I1016 19:45:20.758071  494907 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:45:20.758092  494907 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:45:20.758112  494907 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:45:20.758134  494907 start.go:360] acquireMachinesLock for newest-cni-408495: {Name:mk4f5bcb30afe2773f49aca4b6c534db2867d41f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:45:20.758191  494907 start.go:364] duration metric: took 39.336µs to acquireMachinesLock for "newest-cni-408495"
	I1016 19:45:20.758210  494907 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:45:20.758216  494907 fix.go:54] fixHost starting: 
	I1016 19:45:20.758471  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:20.775271  494907 fix.go:112] recreateIfNeeded on newest-cni-408495: state=Stopped err=<nil>
	W1016 19:45:20.775298  494907 fix.go:138] unexpected machine state, will restart: <nil>
	W1016 19:45:21.433889  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:23.933570  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:20.778528  494907 out.go:252] * Restarting existing docker container for "newest-cni-408495" ...
	I1016 19:45:20.778611  494907 cli_runner.go:164] Run: docker start newest-cni-408495
	I1016 19:45:21.044201  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:21.069105  494907 kic.go:430] container "newest-cni-408495" state is running.
	I1016 19:45:21.069834  494907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:45:21.097296  494907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/config.json ...
	I1016 19:45:21.097683  494907 machine.go:93] provisionDockerMachine start ...
	I1016 19:45:21.097813  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:21.122260  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:21.122582  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:21.122593  494907 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:45:21.123261  494907 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:45:24.272845  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-408495
	
	I1016 19:45:24.272875  494907 ubuntu.go:182] provisioning hostname "newest-cni-408495"
	I1016 19:45:24.272939  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:24.292955  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:24.293343  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:24.293365  494907 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-408495 && echo "newest-cni-408495" | sudo tee /etc/hostname
	I1016 19:45:24.451367  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-408495
	
	I1016 19:45:24.451450  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:24.469618  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:24.469933  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:24.469959  494907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-408495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-408495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-408495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:45:24.617434  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:45:24.617462  494907 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:45:24.617482  494907 ubuntu.go:190] setting up certificates
	I1016 19:45:24.617495  494907 provision.go:84] configureAuth start
	I1016 19:45:24.617563  494907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:45:24.635718  494907 provision.go:143] copyHostCerts
	I1016 19:45:24.635790  494907 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:45:24.635814  494907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:45:24.635898  494907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:45:24.636007  494907 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:45:24.636018  494907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:45:24.636045  494907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:45:24.636111  494907 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:45:24.636120  494907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:45:24.636144  494907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:45:24.636198  494907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.newest-cni-408495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-408495]
	I1016 19:45:25.376985  494907 provision.go:177] copyRemoteCerts
	I1016 19:45:25.377068  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:45:25.377111  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:25.395831  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:25.496955  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:45:25.515722  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 19:45:25.534032  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:45:25.552169  494907 provision.go:87] duration metric: took 934.659269ms to configureAuth
	I1016 19:45:25.552197  494907 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:45:25.552400  494907 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:25.552508  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:25.570945  494907 main.go:141] libmachine: Using SSH client type: native
	I1016 19:45:25.571255  494907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1016 19:45:25.571276  494907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:45:25.870532  494907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:45:25.870555  494907 machine.go:96] duration metric: took 4.772858672s to provisionDockerMachine
	I1016 19:45:25.870566  494907 start.go:293] postStartSetup for "newest-cni-408495" (driver="docker")
	I1016 19:45:25.870576  494907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:45:25.870637  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:45:25.870676  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:25.888389  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:25.992960  494907 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:45:25.996754  494907 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:45:25.996785  494907 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:45:25.996798  494907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:45:25.996855  494907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:45:25.996945  494907 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:45:25.997059  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:45:26.004677  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:45:26.028244  494907 start.go:296] duration metric: took 157.645403ms for postStartSetup
	I1016 19:45:26.028334  494907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:45:26.028375  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:26.046401  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:26.146435  494907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:45:26.151632  494907 fix.go:56] duration metric: took 5.393408657s for fixHost
	I1016 19:45:26.151658  494907 start.go:83] releasing machines lock for "newest-cni-408495", held for 5.393458733s
	I1016 19:45:26.151728  494907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-408495
	I1016 19:45:26.169405  494907 ssh_runner.go:195] Run: cat /version.json
	I1016 19:45:26.169435  494907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:45:26.169464  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:26.169556  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:26.190497  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:26.192195  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:26.292712  494907 ssh_runner.go:195] Run: systemctl --version
	I1016 19:45:26.383112  494907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:45:26.418674  494907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:45:26.423067  494907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:45:26.423138  494907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:45:26.430962  494907 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:45:26.430988  494907 start.go:495] detecting cgroup driver to use...
	I1016 19:45:26.431021  494907 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:45:26.431076  494907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:45:26.449023  494907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:45:26.462080  494907 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:45:26.462184  494907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:45:26.478102  494907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:45:26.491046  494907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:45:26.607329  494907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:45:26.726347  494907 docker.go:234] disabling docker service ...
	I1016 19:45:26.726423  494907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:45:26.742156  494907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:45:26.756440  494907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:45:26.890745  494907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:45:27.018119  494907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:45:27.031781  494907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:45:27.045792  494907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:45:27.045891  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.055234  494907 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:45:27.055361  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.063816  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.072422  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.082001  494907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:45:27.090802  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.099836  494907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.108376  494907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:45:27.117098  494907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:45:27.124646  494907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:45:27.132115  494907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:45:27.249213  494907 ssh_runner.go:195] Run: sudo systemctl restart crio
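Before the crio restart above, the sed commands rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs (the same pass also adjusts conmon_cgroup and default_sysctls). A rough Go sketch of that in-place rewrite, operating on a local copy of the drop-in rather than the real file:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; edit a local copy only.
		path := "02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same substitutions the sed commands perform above.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
		fmt.Println("rewrote", path)
	}
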
	I1016 19:45:27.392790  494907 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:45:27.392934  494907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:45:27.396912  494907 start.go:563] Will wait 60s for crictl version
	I1016 19:45:27.397032  494907 ssh_runner.go:195] Run: which crictl
	I1016 19:45:27.400875  494907 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:45:27.427381  494907 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:45:27.427572  494907 ssh_runner.go:195] Run: crio --version
	I1016 19:45:27.461537  494907 ssh_runner.go:195] Run: crio --version
	I1016 19:45:27.498777  494907 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:45:27.501656  494907 cli_runner.go:164] Run: docker network inspect newest-cni-408495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:45:27.518543  494907 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 19:45:27.522667  494907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
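The /etc/hosts update above removes any stale host.minikube.internal line and appends the current gateway mapping before copying the result back into place with sudo. A small Go sketch of that filter-and-append step, writing to /tmp/hosts.updated instead of /etc/hosts so it is side-effect free:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.85.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous mapping for host.minikube.internal, keep everything else.
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/tmp/hosts.updated", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /tmp/hosts.updated; the real flow copies it over /etc/hosts with sudo")
	}
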
	I1016 19:45:27.535298  494907 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1016 19:45:27.538110  494907 kubeadm.go:883] updating cluster {Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:45:27.538261  494907 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:45:27.538363  494907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:45:27.581010  494907 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:45:27.581035  494907 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:45:27.581098  494907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:45:27.611013  494907 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:45:27.611036  494907 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:45:27.611043  494907 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1016 19:45:27.611142  494907 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-408495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:45:27.611227  494907 ssh_runner.go:195] Run: crio config
	I1016 19:45:27.674792  494907 cni.go:84] Creating CNI manager for ""
	I1016 19:45:27.674816  494907 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:45:27.674839  494907 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1016 19:45:27.674883  494907 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-408495 NodeName:newest-cni-408495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:45:27.675051  494907 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-408495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:45:27.675126  494907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:45:27.682946  494907 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:45:27.683038  494907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:45:27.690553  494907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1016 19:45:27.703005  494907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:45:27.715512  494907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1016 19:45:27.728071  494907 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:45:27.731454  494907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:45:27.740825  494907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:45:27.855473  494907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:45:27.870318  494907 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495 for IP: 192.168.85.2
	I1016 19:45:27.870393  494907 certs.go:195] generating shared ca certs ...
	I1016 19:45:27.870423  494907 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:27.870614  494907 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:45:27.870695  494907 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:45:27.870717  494907 certs.go:257] generating profile certs ...
	I1016 19:45:27.870937  494907 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/client.key
	I1016 19:45:27.871069  494907 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key.3eb76944
	I1016 19:45:27.871149  494907 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key
	I1016 19:45:27.871301  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:45:27.871379  494907 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:45:27.871404  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:45:27.871468  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:45:27.871521  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:45:27.871568  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:45:27.871645  494907 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:45:27.872507  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:45:27.897565  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:45:27.917289  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:45:27.941172  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:45:27.963061  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1016 19:45:27.987133  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:45:28.016002  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:45:28.045909  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/newest-cni-408495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 19:45:28.075608  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:45:28.100196  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:45:28.119801  494907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:45:28.144484  494907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:45:28.158105  494907 ssh_runner.go:195] Run: openssl version
	I1016 19:45:28.164694  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:45:28.173831  494907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:45:28.177555  494907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:45:28.177622  494907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:45:28.219121  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:45:28.226980  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:45:28.235374  494907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:45:28.239075  494907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:45:28.239139  494907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:45:28.280053  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:45:28.288735  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:45:28.297195  494907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:45:28.300795  494907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:45:28.300860  494907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:45:28.342231  494907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:45:28.350021  494907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:45:28.353834  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:45:28.395999  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:45:28.437584  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:45:28.482287  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:45:28.540182  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:45:28.610990  494907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
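Each openssl "-checkend 86400" run above asserts that a control-plane certificate will still be valid 24 hours from now. The equivalent check in Go, shown here for a single hard-coded path from the log as an illustration only:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// One of the certificates checked above; path taken from the log.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// "-checkend 86400": fail if the cert expires within the next 24 hours.
		if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h, expires", cert.NotAfter)
	}
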
	I1016 19:45:28.714420  494907 kubeadm.go:400] StartCluster: {Name:newest-cni-408495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-408495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:45:28.714533  494907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:45:28.714649  494907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:45:28.775050  494907 cri.go:89] found id: "e4f3e3fd9a25fd6f2aeca07c188cfe599751fb591689a318653e360958e27cf5"
	I1016 19:45:28.775086  494907 cri.go:89] found id: "2b96988c62b19f605ebea6bc4b48cd7579b71a62b59f9d6d042e6bd8a3b8bb2e"
	I1016 19:45:28.775092  494907 cri.go:89] found id: "74f48fee211f7b365a3bee8a063b590d0eea60c3639cde2f3e7f1bd036d8f440"
	I1016 19:45:28.775096  494907 cri.go:89] found id: "7b7239d3b6dbc021205aef879390811244649e59d88ccb4c88a903b9ced2779b"
	I1016 19:45:28.775100  494907 cri.go:89] found id: ""
	I1016 19:45:28.775182  494907 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:45:28.789928  494907 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:28Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:45:28.790069  494907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:45:28.805916  494907 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:45:28.805989  494907 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:45:28.806330  494907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:45:28.820916  494907 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:45:28.821677  494907 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-408495" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:28.822015  494907 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-408495" cluster setting kubeconfig missing "newest-cni-408495" context setting]
	I1016 19:45:28.822663  494907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:28.824633  494907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:45:28.832653  494907 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1016 19:45:28.832727  494907 kubeadm.go:601] duration metric: took 26.716941ms to restartPrimaryControlPlane
	I1016 19:45:28.832752  494907 kubeadm.go:402] duration metric: took 118.341901ms to StartCluster
	I1016 19:45:28.832781  494907 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:28.832860  494907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:28.833830  494907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:28.834098  494907 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:45:28.834348  494907 config.go:182] Loaded profile config "newest-cni-408495": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:28.834296  494907 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:45:28.834523  494907 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-408495"
	I1016 19:45:28.834537  494907 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-408495"
	W1016 19:45:28.834543  494907 addons.go:247] addon storage-provisioner should already be in state true
	I1016 19:45:28.834566  494907 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:28.835029  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.835489  494907 addons.go:69] Setting default-storageclass=true in profile "newest-cni-408495"
	I1016 19:45:28.835522  494907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-408495"
	I1016 19:45:28.835586  494907 addons.go:69] Setting dashboard=true in profile "newest-cni-408495"
	I1016 19:45:28.835602  494907 addons.go:238] Setting addon dashboard=true in "newest-cni-408495"
	W1016 19:45:28.835608  494907 addons.go:247] addon dashboard should already be in state true
	I1016 19:45:28.835635  494907 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:28.835818  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.836036  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.839074  494907 out.go:179] * Verifying Kubernetes components...
	I1016 19:45:28.847359  494907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:45:28.885052  494907 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:45:28.890516  494907 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:45:28.890538  494907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:45:28.890606  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:28.900449  494907 addons.go:238] Setting addon default-storageclass=true in "newest-cni-408495"
	W1016 19:45:28.900477  494907 addons.go:247] addon default-storageclass should already be in state true
	I1016 19:45:28.900525  494907 host.go:66] Checking if "newest-cni-408495" exists ...
	I1016 19:45:28.900955  494907 cli_runner.go:164] Run: docker container inspect newest-cni-408495 --format={{.State.Status}}
	I1016 19:45:28.904042  494907 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 19:45:28.912807  494907 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1016 19:45:25.933676  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:27.934292  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:28.917201  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 19:45:28.917227  494907 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 19:45:28.917300  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:28.941223  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:28.941350  494907 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:45:28.941364  494907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:45:28.941419  494907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-408495
	I1016 19:45:28.973679  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:28.980898  494907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/newest-cni-408495/id_rsa Username:docker}
	I1016 19:45:29.159442  494907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:45:29.205352  494907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:45:29.211364  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 19:45:29.211436  494907 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 19:45:29.257405  494907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:45:29.280655  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 19:45:29.280730  494907 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 19:45:29.342397  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 19:45:29.342466  494907 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 19:45:29.402856  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 19:45:29.402926  494907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 19:45:29.458295  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 19:45:29.458369  494907 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 19:45:29.486967  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 19:45:29.487043  494907 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 19:45:29.511012  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 19:45:29.511091  494907 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 19:45:29.530341  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 19:45:29.530413  494907 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 19:45:29.553346  494907 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:45:29.553426  494907 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 19:45:29.576562  494907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1016 19:45:29.936552  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:32.433276  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	W1016 19:45:34.433810  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:35.060463  494907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.900947898s)
	I1016 19:45:35.060523  494907 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.855100696s)
	I1016 19:45:35.060556  494907 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:45:35.060616  494907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:45:35.060695  494907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.803230214s)
	I1016 19:45:35.061114  494907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.484478717s)
	I1016 19:45:35.064096  494907 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-408495 addons enable metrics-server
	
	I1016 19:45:35.085121  494907 api_server.go:72] duration metric: took 6.25073476s to wait for apiserver process to appear ...
	I1016 19:45:35.085176  494907 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:45:35.085204  494907 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:45:35.095274  494907 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1016 19:45:35.096898  494907 api_server.go:141] control plane version: v1.34.1
	I1016 19:45:35.096979  494907 api_server.go:131] duration metric: took 11.794638ms to wait for apiserver health ...
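The healthz wait above simply polls https://192.168.85.2:8443/healthz until it answers 200 "ok". A stripped-down Go sketch of that loop (the real client trusts the cluster CA; InsecureSkipVerify is used here only to keep the sketch short):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}
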
	I1016 19:45:35.097016  494907 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:45:35.101500  494907 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 19:45:35.104629  494907 addons.go:514] duration metric: took 6.27033569s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1016 19:45:35.106197  494907 system_pods.go:59] 8 kube-system pods found
	I1016 19:45:35.106231  494907 system_pods.go:61] "coredns-66bc5c9577-wd562" [7e3e6903-1b13-40d0-91ee-345356eedde4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 19:45:35.106241  494907 system_pods.go:61] "etcd-newest-cni-408495" [0e5cebea-13bb-4784-9247-5a021cc3b89d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:45:35.106251  494907 system_pods.go:61] "kindnet-9sr6p" [02d047e5-f3d9-4ab8-8c5d-70f6efb82f39] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 19:45:35.106258  494907 system_pods.go:61] "kube-apiserver-newest-cni-408495" [a564226f-8d4d-4f8a-8129-116f7fde1dad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:45:35.106264  494907 system_pods.go:61] "kube-controller-manager-newest-cni-408495" [134b6611-b670-44be-9bdf-a2258c3c7bed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:45:35.106271  494907 system_pods.go:61] "kube-proxy-lh68f" [cd2f50b1-a314-43cb-a543-15ab3396db7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 19:45:35.106278  494907 system_pods.go:61] "kube-scheduler-newest-cni-408495" [7955ac6a-cda9-4c86-a5e4-990606dfbb0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:45:35.106283  494907 system_pods.go:61] "storage-provisioner" [af091ec8-8f1b-458e-916f-2232da7ac31a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 19:45:35.106290  494907 system_pods.go:74] duration metric: took 9.241952ms to wait for pod list to return data ...
	I1016 19:45:35.106299  494907 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:45:35.110168  494907 default_sa.go:45] found service account: "default"
	I1016 19:45:35.110197  494907 default_sa.go:55] duration metric: took 3.88505ms for default service account to be created ...
	I1016 19:45:35.110211  494907 kubeadm.go:586] duration metric: took 6.275830743s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 19:45:35.110229  494907 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:45:35.123008  494907 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:45:35.123097  494907 node_conditions.go:123] node cpu capacity is 2
	I1016 19:45:35.123126  494907 node_conditions.go:105] duration metric: took 12.889772ms to run NodePressure ...
	I1016 19:45:35.123173  494907 start.go:241] waiting for startup goroutines ...
	I1016 19:45:35.123201  494907 start.go:246] waiting for cluster config update ...
	I1016 19:45:35.123231  494907 start.go:255] writing updated cluster config ...
	I1016 19:45:35.123602  494907 ssh_runner.go:195] Run: rm -f paused
	I1016 19:45:35.239781  494907 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:45:35.243009  494907 out.go:179] * Done! kubectl is now configured to use "newest-cni-408495" cluster and "default" namespace by default
	W1016 19:45:36.933688  488039 node_ready.go:57] node "default-k8s-diff-port-850436" has "Ready":"False" status (will retry)
	I1016 19:45:37.933526  488039 node_ready.go:49] node "default-k8s-diff-port-850436" is "Ready"
	I1016 19:45:37.933549  488039 node_ready.go:38] duration metric: took 40.003196258s for node "default-k8s-diff-port-850436" to be "Ready" ...
	I1016 19:45:37.933561  488039 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:45:37.933618  488039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:45:37.947997  488039 api_server.go:72] duration metric: took 42.148061607s to wait for apiserver process to appear ...
	I1016 19:45:37.948017  488039 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:45:37.948037  488039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1016 19:45:37.960874  488039 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1016 19:45:37.962556  488039 api_server.go:141] control plane version: v1.34.1
	I1016 19:45:37.962614  488039 api_server.go:131] duration metric: took 14.589697ms to wait for apiserver health ...
	I1016 19:45:37.962638  488039 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:45:37.974082  488039 system_pods.go:59] 8 kube-system pods found
	I1016 19:45:37.974120  488039 system_pods.go:61] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:45:37.974127  488039 system_pods.go:61] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running
	I1016 19:45:37.974132  488039 system_pods.go:61] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:45:37.974137  488039 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running
	I1016 19:45:37.974141  488039 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running
	I1016 19:45:37.974145  488039 system_pods.go:61] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:45:37.974150  488039 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running
	I1016 19:45:37.974156  488039 system_pods.go:61] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:45:37.974169  488039 system_pods.go:74] duration metric: took 11.512526ms to wait for pod list to return data ...
	I1016 19:45:37.974186  488039 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:45:37.977335  488039 default_sa.go:45] found service account: "default"
	I1016 19:45:37.977354  488039 default_sa.go:55] duration metric: took 3.161472ms for default service account to be created ...
	I1016 19:45:37.977363  488039 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:45:37.992267  488039 system_pods.go:86] 8 kube-system pods found
	I1016 19:45:37.992308  488039 system_pods.go:89] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:45:37.992318  488039 system_pods.go:89] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running
	I1016 19:45:37.992324  488039 system_pods.go:89] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:45:37.992329  488039 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running
	I1016 19:45:37.992333  488039 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running
	I1016 19:45:37.992347  488039 system_pods.go:89] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:45:37.992352  488039 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running
	I1016 19:45:37.992358  488039 system_pods.go:89] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:45:37.992384  488039 retry.go:31] will retry after 213.494824ms: missing components: kube-dns
	I1016 19:45:38.212423  488039 system_pods.go:86] 8 kube-system pods found
	I1016 19:45:38.212462  488039 system_pods.go:89] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:45:38.212480  488039 system_pods.go:89] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running
	I1016 19:45:38.212494  488039 system_pods.go:89] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:45:38.212499  488039 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running
	I1016 19:45:38.212504  488039 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running
	I1016 19:45:38.212508  488039 system_pods.go:89] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:45:38.212512  488039 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running
	I1016 19:45:38.212518  488039 system_pods.go:89] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:45:38.212543  488039 retry.go:31] will retry after 236.589234ms: missing components: kube-dns
	I1016 19:45:38.455113  488039 system_pods.go:86] 8 kube-system pods found
	I1016 19:45:38.455148  488039 system_pods.go:89] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:45:38.455156  488039 system_pods.go:89] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running
	I1016 19:45:38.455162  488039 system_pods.go:89] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:45:38.455167  488039 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running
	I1016 19:45:38.455172  488039 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running
	I1016 19:45:38.455176  488039 system_pods.go:89] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:45:38.455180  488039 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running
	I1016 19:45:38.455185  488039 system_pods.go:89] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:45:38.455199  488039 retry.go:31] will retry after 356.055934ms: missing components: kube-dns
	I1016 19:45:38.814580  488039 system_pods.go:86] 8 kube-system pods found
	I1016 19:45:38.814619  488039 system_pods.go:89] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:45:38.814627  488039 system_pods.go:89] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running
	I1016 19:45:38.814633  488039 system_pods.go:89] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:45:38.814638  488039 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running
	I1016 19:45:38.814642  488039 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running
	I1016 19:45:38.814649  488039 system_pods.go:89] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:45:38.814662  488039 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running
	I1016 19:45:38.814668  488039 system_pods.go:89] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:45:38.814684  488039 retry.go:31] will retry after 589.04012ms: missing components: kube-dns
	I1016 19:45:39.407877  488039 system_pods.go:86] 8 kube-system pods found
	I1016 19:45:39.407910  488039 system_pods.go:89] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Running
	I1016 19:45:39.407918  488039 system_pods.go:89] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running
	I1016 19:45:39.407924  488039 system_pods.go:89] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:45:39.407929  488039 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running
	I1016 19:45:39.407935  488039 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running
	I1016 19:45:39.407939  488039 system_pods.go:89] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:45:39.407954  488039 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running
	I1016 19:45:39.407965  488039 system_pods.go:89] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Running
	I1016 19:45:39.407972  488039 system_pods.go:126] duration metric: took 1.430603791s to wait for k8s-apps to be running ...
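The "will retry after ..." lines above come from re-checking the kube-system pod list with a growing delay until kube-dns reports Running. A simplified Go sketch of that retry pattern, with a toy check function standing in for the real pod query:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor re-runs check with a growing delay until it succeeds or the timeout elapses.
	func waitFor(check func() error, timeout time.Duration) error {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			} else if time.Now().Add(delay).After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			} else {
				fmt.Printf("will retry after %s: %v\n", delay, err)
				time.Sleep(delay)
				delay = delay * 3 / 2 // grow the delay between attempts, as in the log above
			}
		}
	}

	func main() {
		attempts := 0
		err := waitFor(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("missing components: kube-dns") // toy stand-in for the pod check
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err)
	}
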
	I1016 19:45:39.407980  488039 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:45:39.408036  488039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:45:39.426384  488039 system_svc.go:56] duration metric: took 18.393557ms WaitForService to wait for kubelet
	I1016 19:45:39.426415  488039 kubeadm.go:586] duration metric: took 43.626484293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:45:39.426435  488039 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:45:39.429487  488039 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:45:39.429524  488039 node_conditions.go:123] node cpu capacity is 2
	I1016 19:45:39.429539  488039 node_conditions.go:105] duration metric: took 3.097062ms to run NodePressure ...
	I1016 19:45:39.429553  488039 start.go:241] waiting for startup goroutines ...
	I1016 19:45:39.429561  488039 start.go:246] waiting for cluster config update ...
	I1016 19:45:39.429575  488039 start.go:255] writing updated cluster config ...
	I1016 19:45:39.429869  488039 ssh_runner.go:195] Run: rm -f paused
	I1016 19:45:39.437735  488039 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:45:39.441165  488039 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vnm65" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:45:39.445845  488039 pod_ready.go:94] pod "coredns-66bc5c9577-vnm65" is "Ready"
	I1016 19:45:39.445872  488039 pod_ready.go:86] duration metric: took 4.675434ms for pod "coredns-66bc5c9577-vnm65" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:45:39.448158  488039 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:45:39.457232  488039 pod_ready.go:94] pod "etcd-default-k8s-diff-port-850436" is "Ready"
	I1016 19:45:39.457272  488039 pod_ready.go:86] duration metric: took 9.090287ms for pod "etcd-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:45:39.459588  488039 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:45:39.464251  488039 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-850436" is "Ready"
	I1016 19:45:39.464285  488039 pod_ready.go:86] duration metric: took 4.66252ms for pod "kube-apiserver-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:45:39.466513  488039 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:45:39.841811  488039 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-850436" is "Ready"
	I1016 19:45:39.841843  488039 pod_ready.go:86] duration metric: took 375.305269ms for pod "kube-controller-manager-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.284296194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.293383173Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=40d99df0-323e-47f1-8d4d-106017f178b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.295851222Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-lh68f/POD" id=2e238bd8-ae00-47fa-8ddc-7c25cbe927c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.295936844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.29985089Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2e238bd8-ae00-47fa-8ddc-7c25cbe927c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.302308043Z" level=info msg="Ran pod sandbox f71613153d16647ffe8f433bf34eef00acac5296cc75f9f056c3dac4ab92bda0 with infra container: kube-system/kindnet-9sr6p/POD" id=40d99df0-323e-47f1-8d4d-106017f178b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.305893323Z" level=info msg="Ran pod sandbox 0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced with infra container: kube-system/kube-proxy-lh68f/POD" id=2e238bd8-ae00-47fa-8ddc-7c25cbe927c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.309641927Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3b5a9740-8cdc-4bda-85f3-82519b6a024e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.311198384Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=eae6b123-afd1-4ed6-876b-97acbaa342f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.3125621Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5e8f495e-c896-4ed5-8e54-a01bbb1e330c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.312953941Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ac0405e6-5f6a-43d7-ab8b-c47e1f971a25 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.314393301Z" level=info msg="Creating container: kube-system/kindnet-9sr6p/kindnet-cni" id=341566df-4f8b-4766-b6ce-1e40dab65c40 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.314721469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.315702042Z" level=info msg="Creating container: kube-system/kube-proxy-lh68f/kube-proxy" id=66841ab3-acf1-4857-9a6f-d9f92be5532f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.318021502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.328794146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.337793224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.340224144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.340855749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.373960882Z" level=info msg="Created container 0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc: kube-system/kube-proxy-lh68f/kube-proxy" id=66841ab3-acf1-4857-9a6f-d9f92be5532f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.374825368Z" level=info msg="Starting container: 0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc" id=a6ecfc01-76c7-44b7-9332-c7e4ab690603 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.377586704Z" level=info msg="Created container ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da: kube-system/kindnet-9sr6p/kindnet-cni" id=341566df-4f8b-4766-b6ce-1e40dab65c40 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.385774457Z" level=info msg="Starting container: ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da" id=29b8e0a9-41ed-4808-bfbe-7cd9836aa3ca name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.386431343Z" level=info msg="Started container" PID=1059 containerID=0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc description=kube-system/kube-proxy-lh68f/kube-proxy id=a6ecfc01-76c7-44b7-9332-c7e4ab690603 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced
	Oct 16 19:45:34 newest-cni-408495 crio[610]: time="2025-10-16T19:45:34.389472485Z" level=info msg="Started container" PID=1058 containerID=ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da description=kube-system/kindnet-9sr6p/kindnet-cni id=29b8e0a9-41ed-4808-bfbe-7cd9836aa3ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=f71613153d16647ffe8f433bf34eef00acac5296cc75f9f056c3dac4ab92bda0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ef8a9127987ec       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   f71613153d166       kindnet-9sr6p                               kube-system
	0cc8f8b2d746b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   0ba0fd8f6bcfc       kube-proxy-lh68f                            kube-system
	e4f3e3fd9a25f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   d643fc2f578c1       kube-scheduler-newest-cni-408495            kube-system
	2b96988c62b19       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   449fcb2b9b5d3       etcd-newest-cni-408495                      kube-system
	74f48fee211f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   ee4f62598e572       kube-controller-manager-newest-cni-408495   kube-system
	7b7239d3b6dbc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   f57168011e4e8       kube-apiserver-newest-cni-408495            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-408495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-408495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=newest-cni-408495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_45_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:45:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-408495
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:45:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 16 Oct 2025 19:45:33 +0000   Thu, 16 Oct 2025 19:45:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-408495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                2b847ec6-788c-498e-9669-d3802c2dcb5e
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-408495                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-9sr6p                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-408495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-408495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-lh68f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-408495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-408495 event: Registered Node newest-cni-408495 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-408495 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-408495 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-408495 event: Registered Node newest-cni-408495 in Controller
	
	
	==> dmesg <==
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	[Oct16 19:44] overlayfs: idmapped layers are currently not supported
	[Oct16 19:45] overlayfs: idmapped layers are currently not supported
	[ +26.426905] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2b96988c62b19f605ebea6bc4b48cd7579b71a62b59f9d6d042e6bd8a3b8bb2e] <==
	{"level":"warn","ts":"2025-10-16T19:45:32.174759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.192808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.218459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.235911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.256296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.281229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.318758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.324455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.342112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.357399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.373850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.397950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.418347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.433640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.453062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.475251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.496692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.513456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.530332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.551358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.568898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.598713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.641935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.654948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:45:32.775705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35460","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:45:41 up  2:27,  0 user,  load average: 3.52, 3.49, 3.01
	Linux newest-cni-408495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ef8a9127987ec8ec7bb5370fdad4100dd60ca99aa9f188e79a6a27cd8b18e4da] <==
	I1016 19:45:34.417815       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:45:34.421491       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 19:45:34.421628       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:45:34.421640       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:45:34.421652       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:45:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:45:34.625824       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:45:34.625872       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:45:34.625881       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:45:34.626581       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [7b7239d3b6dbc021205aef879390811244649e59d88ccb4c88a903b9ced2779b] <==
	I1016 19:45:33.778123       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:45:33.778390       1 aggregator.go:171] initial CRD sync complete...
	I1016 19:45:33.778402       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 19:45:33.778410       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:45:33.778418       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:45:33.797777       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:45:33.801283       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 19:45:33.803542       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:45:33.810104       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:45:33.810141       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:45:33.816968       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:45:33.817065       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:45:33.817264       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 19:45:33.839915       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:45:34.037589       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:45:34.520485       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:45:34.703020       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:45:34.763922       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:45:34.816163       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:45:34.849177       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:45:34.947366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.59.217"}
	I1016 19:45:34.970434       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.163.155"}
	I1016 19:45:36.976602       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:45:37.325915       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:45:37.525872       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [74f48fee211f7b365a3bee8a063b590d0eea60c3639cde2f3e7f1bd036d8f440] <==
	I1016 19:45:36.974048       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 19:45:36.976134       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:45:36.976226       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 19:45:36.982139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:45:36.982221       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:45:36.982252       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:45:36.982364       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:45:36.983781       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 19:45:36.989371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:45:36.990612       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:45:37.007666       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:45:37.007949       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 19:45:37.020209       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 19:45:37.021258       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:45:37.021372       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:45:37.021621       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 19:45:37.021802       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 19:45:37.021888       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 19:45:37.022186       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 19:45:37.022382       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:45:37.023789       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-408495"
	I1016 19:45:37.023921       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 19:45:37.028502       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:45:37.029226       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:45:37.029430       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [0cc8f8b2d746b739934597db57cb073031ee1ef32eb1a6ad68152ce32d363ebc] <==
	I1016 19:45:34.435665       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:45:34.540764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:45:34.641798       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:45:34.641838       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 19:45:34.641912       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:45:34.929678       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:45:34.929803       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:45:34.945497       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:45:34.962725       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:45:34.963472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:45:34.967987       1 config.go:200] "Starting service config controller"
	I1016 19:45:34.968067       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:45:34.968112       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:45:34.968155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:45:34.968193       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:45:34.968226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:45:34.978999       1 config.go:309] "Starting node config controller"
	I1016 19:45:34.979017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:45:34.979024       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:45:35.076025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:45:35.096801       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:45:35.098425       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e4f3e3fd9a25fd6f2aeca07c188cfe599751fb591689a318653e360958e27cf5] <==
	I1016 19:45:31.791024       1 serving.go:386] Generated self-signed cert in-memory
	I1016 19:45:34.141952       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:45:34.141994       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:45:34.153325       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 19:45:34.153374       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 19:45:34.153402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:34.153409       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:34.153425       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:45:34.153438       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:45:34.153917       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:45:34.154065       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:45:34.253533       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:45:34.253567       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 19:45:34.253542       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.351997     725 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-408495\" not found" node="newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.694512     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.830809     725 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.830923     725 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.830953     725 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.832116     725 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.869607     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-408495\" already exists" pod="kube-system/kube-scheduler-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.869642     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.880576     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-408495\" already exists" pod="kube-system/etcd-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.880618     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.903593     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-408495\" already exists" pod="kube-system/kube-apiserver-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.903626     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: E1016 19:45:33.943870     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-408495\" already exists" pod="kube-system/kube-controller-manager-newest-cni-408495"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.975252     725 apiserver.go:52] "Watching apiserver"
	Oct 16 19:45:33 newest-cni-408495 kubelet[725]: I1016 19:45:33.992772     725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020732     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd2f50b1-a314-43cb-a543-15ab3396db7e-xtables-lock\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020790     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd2f50b1-a314-43cb-a543-15ab3396db7e-lib-modules\") pod \"kube-proxy-lh68f\" (UID: \"cd2f50b1-a314-43cb-a543-15ab3396db7e\") " pod="kube-system/kube-proxy-lh68f"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020808     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-xtables-lock\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020846     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-cni-cfg\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.020866     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02d047e5-f3d9-4ab8-8c5d-70f6efb82f39-lib-modules\") pod \"kindnet-9sr6p\" (UID: \"02d047e5-f3d9-4ab8-8c5d-70f6efb82f39\") " pod="kube-system/kindnet-9sr6p"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: I1016 19:45:34.068362     725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 19:45:34 newest-cni-408495 kubelet[725]: W1016 19:45:34.304750     725 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fc99bb32a05a4ff2efe58724434f888751651299191d58bda95715301ca74e84/crio-0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced WatchSource:0}: Error finding container 0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced: Status 404 returned error can't find the container with id 0ba0fd8f6bcfc5fcab79ce4a44d9e5f12ccf3747af87362f8e57d9f56181bced
	Oct 16 19:45:36 newest-cni-408495 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:45:36 newest-cni-408495 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:45:36 newest-cni-408495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-408495 -n newest-cni-408495
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-408495 -n newest-cni-408495: exit status 2 (387.321956ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-408495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl: exit status 1 (102.700273ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wd562" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5cbvm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fs5nl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-408495 describe pod coredns-66bc5c9577-wd562 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5cbvm kubernetes-dashboard-855c9754f9-fs5nl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-850436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-850436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (401.677067ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:45:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-850436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-850436 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850436 describe deploy/metrics-server -n kube-system: exit status 1 (216.784019ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-850436 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-850436
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-850436:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a",
	        "Created": "2025-10-16T19:44:20.385325839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488472,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:44:20.451124124Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/hosts",
	        "LogPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a-json.log",
	        "Name": "/default-k8s-diff-port-850436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-850436:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-850436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a",
	                "LowerDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-850436",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-850436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-850436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-850436",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-850436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d42cb4049f9ecb07a486f3c1be14b0a87e2adbebcae7ca560291643efa64f99c",
	            "SandboxKey": "/var/run/docker/netns/d42cb4049f9e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-850436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:5e:27:c6:cc:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12c5ab8893cdac2531939d281a38b055f53ba9453adc3d59ffb5147c0257d0fe",
	                    "EndpointID": "d71b1e777e9fd691fa3dbc99f99341c27421fdc3e6a8e671573cec7f0ca925d7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-850436",
	                        "4aa7104008e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-850436 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-850436 logs -n 25: (1.265745757s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-751669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │                     │
	│ stop    │ -p embed-certs-751669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:43 UTC │
	│ start   │ -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:43 UTC │ 16 Oct 25 19:44 UTC │
	│ image   │ no-preload-225696 image list --format=json                                                                                                                                                                                                    │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p no-preload-225696 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ stop    │ -p newest-cni-408495 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-408495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ newest-cni-408495 image list --format=json                                                                                                                                                                                                    │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ pause   │ -p newest-cni-408495 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ delete  │ -p newest-cni-408495                                                                                                                                                                                                                          │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ delete  │ -p newest-cni-408495                                                                                                                                                                                                                          │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p auto-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-078761                  │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-850436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:45:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:45:44.826898  498106 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:45:44.827092  498106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:44.827103  498106 out.go:374] Setting ErrFile to fd 2...
	I1016 19:45:44.827109  498106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:45:44.827419  498106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:45:44.827878  498106 out.go:368] Setting JSON to false
	I1016 19:45:44.828858  498106 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8874,"bootTime":1760635071,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:45:44.828929  498106 start.go:141] virtualization:  
	I1016 19:45:44.833522  498106 out.go:179] * [auto-078761] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:45:44.838186  498106 notify.go:220] Checking for updates...
	I1016 19:45:44.842245  498106 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:45:44.845292  498106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:45:44.848301  498106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:45:44.851203  498106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:45:44.854160  498106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:45:44.857123  498106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:45:44.860744  498106 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:45:44.861012  498106 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:45:44.885243  498106 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:45:44.885375  498106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:45:44.942462  498106 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:45:44.932787046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:45:44.942570  498106 docker.go:318] overlay module found
	I1016 19:45:44.945960  498106 out.go:179] * Using the docker driver based on user configuration
	I1016 19:45:44.948863  498106 start.go:305] selected driver: docker
	I1016 19:45:44.948883  498106 start.go:925] validating driver "docker" against <nil>
	I1016 19:45:44.948896  498106 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:45:44.949835  498106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:45:45.017288  498106 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:45:44.998505775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:45:45.017474  498106 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 19:45:45.017733  498106 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:45:45.030843  498106 out.go:179] * Using Docker driver with root privileges
	I1016 19:45:45.035631  498106 cni.go:84] Creating CNI manager for ""
	I1016 19:45:45.035725  498106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:45:45.035737  498106 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 19:45:45.035835  498106 start.go:349] cluster config:
	{Name:auto-078761 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-078761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1016 19:45:45.059946  498106 out.go:179] * Starting "auto-078761" primary control-plane node in "auto-078761" cluster
	I1016 19:45:45.067801  498106 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:45:45.070978  498106 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:45:45.074020  498106 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:45:45.074105  498106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:45:45.074173  498106 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:45:45.074198  498106 cache.go:58] Caching tarball of preloaded images
	I1016 19:45:45.074327  498106 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:45:45.074341  498106 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:45:45.074485  498106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/config.json ...
	I1016 19:45:45.074508  498106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/config.json: {Name:mk918c6618646147e71f49bbfd255eaa233a6edb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:45:45.103496  498106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:45:45.103530  498106 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:45:45.103547  498106 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:45:45.103572  498106 start.go:360] acquireMachinesLock for auto-078761: {Name:mk3125f23f98f3f32ef25bbc528264c31ef8a913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:45:45.103691  498106 start.go:364] duration metric: took 102.442µs to acquireMachinesLock for "auto-078761"
	I1016 19:45:45.103726  498106 start.go:93] Provisioning new machine with config: &{Name:auto-078761 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-078761 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:45:45.103822  498106 start.go:125] createHost starting for "" (driver="docker")
	I1016 19:45:45.107876  498106 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 19:45:45.108197  498106 start.go:159] libmachine.API.Create for "auto-078761" (driver="docker")
	I1016 19:45:45.108272  498106 client.go:168] LocalClient.Create starting
	I1016 19:45:45.108387  498106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem
	I1016 19:45:45.108434  498106 main.go:141] libmachine: Decoding PEM data...
	I1016 19:45:45.108457  498106 main.go:141] libmachine: Parsing certificate...
	I1016 19:45:45.108657  498106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem
	I1016 19:45:45.108730  498106 main.go:141] libmachine: Decoding PEM data...
	I1016 19:45:45.108754  498106 main.go:141] libmachine: Parsing certificate...
	I1016 19:45:45.109306  498106 cli_runner.go:164] Run: docker network inspect auto-078761 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 19:45:45.131109  498106 cli_runner.go:211] docker network inspect auto-078761 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 19:45:45.131213  498106 network_create.go:284] running [docker network inspect auto-078761] to gather additional debugging logs...
	I1016 19:45:45.131240  498106 cli_runner.go:164] Run: docker network inspect auto-078761
	W1016 19:45:45.151306  498106 cli_runner.go:211] docker network inspect auto-078761 returned with exit code 1
	I1016 19:45:45.151354  498106 network_create.go:287] error running [docker network inspect auto-078761]: docker network inspect auto-078761: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-078761 not found
	I1016 19:45:45.151370  498106 network_create.go:289] output of [docker network inspect auto-078761]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-078761 not found
	
	** /stderr **
	I1016 19:45:45.151477  498106 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:45:45.183116  498106 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
	I1016 19:45:45.183586  498106 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcb5241e782 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:58:26:d7:8f:45} reservation:<nil>}
	I1016 19:45:45.183891  498106 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-26579fafc836 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:48:af:83:92:ac} reservation:<nil>}
	I1016 19:45:45.184419  498106 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-12c5ab8893cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:81:24:94:43:92} reservation:<nil>}
	I1016 19:45:45.184948  498106 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4a760}
	I1016 19:45:45.185031  498106 network_create.go:124] attempt to create docker network auto-078761 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1016 19:45:45.185114  498106 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-078761 auto-078761
	I1016 19:45:45.279113  498106 network_create.go:108] docker network auto-078761 192.168.85.0/24 created
	I1016 19:45:45.279154  498106 kic.go:121] calculated static IP "192.168.85.2" for the "auto-078761" container
	I1016 19:45:45.279256  498106 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 19:45:45.303305  498106 cli_runner.go:164] Run: docker volume create auto-078761 --label name.minikube.sigs.k8s.io=auto-078761 --label created_by.minikube.sigs.k8s.io=true
	I1016 19:45:45.331653  498106 oci.go:103] Successfully created a docker volume auto-078761
	I1016 19:45:45.331786  498106 cli_runner.go:164] Run: docker run --rm --name auto-078761-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-078761 --entrypoint /usr/bin/test -v auto-078761:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 19:45:45.886479  498106 oci.go:107] Successfully prepared a docker volume auto-078761
	I1016 19:45:45.886535  498106 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:45:45.886556  498106 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 19:45:45.886645  498106 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-078761:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 16 19:45:38 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:38.254189062Z" level=info msg="Created container c923c7eeb1c910b32fa24504800042f07880cdf2243ea7ca3223ccf80f5b1a09: kube-system/storage-provisioner/storage-provisioner" id=2fb4fd37-27d5-4e16-8c5b-0661455fb00f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:38 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:38.255233464Z" level=info msg="Starting container: c923c7eeb1c910b32fa24504800042f07880cdf2243ea7ca3223ccf80f5b1a09" id=458c37b4-1ade-47f3-b45a-4ef0a1526fbc name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:38 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:38.261015413Z" level=info msg="Started container" PID=1737 containerID=c923c7eeb1c910b32fa24504800042f07880cdf2243ea7ca3223ccf80f5b1a09 description=kube-system/storage-provisioner/storage-provisioner id=458c37b4-1ade-47f3-b45a-4ef0a1526fbc name=/runtime.v1.RuntimeService/StartContainer sandboxID=461a445dec89c9799ca3dff87725af40af0908053f8cda1a9a8ae722dbd00d95
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.691909394Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8f95c921-5065-4faf-b0f2-686cf24c6cef name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.691983619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.700090264Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e35537491a07cc3230a2f9f1020da3c3675b317d85708c75c4c60b5f8c21f64c UID:a85c2c7f-3f8e-42da-8972-737f3f75d285 NetNS:/var/run/netns/bf04e15e-e09d-453e-8777-ac9e95a526ca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000d2608}] Aliases:map[]}"
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.700254417Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.712796748Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e35537491a07cc3230a2f9f1020da3c3675b317d85708c75c4c60b5f8c21f64c UID:a85c2c7f-3f8e-42da-8972-737f3f75d285 NetNS:/var/run/netns/bf04e15e-e09d-453e-8777-ac9e95a526ca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000d2608}] Aliases:map[]}"
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.713123792Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.722087169Z" level=info msg="Ran pod sandbox e35537491a07cc3230a2f9f1020da3c3675b317d85708c75c4c60b5f8c21f64c with infra container: default/busybox/POD" id=8f95c921-5065-4faf-b0f2-686cf24c6cef name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.723148998Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a7331f5-ff32-4c77-9242-3e8de8e69e33 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.723322571Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3a7331f5-ff32-4c77-9242-3e8de8e69e33 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.723371803Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3a7331f5-ff32-4c77-9242-3e8de8e69e33 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.724823241Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=80421262-dcad-4d15-a1fe-3050399b44e4 name=/runtime.v1.ImageService/PullImage
	Oct 16 19:45:41 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:41.728185372Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.809610011Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=80421262-dcad-4d15-a1fe-3050399b44e4 name=/runtime.v1.ImageService/PullImage
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.810282421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0bf65f7-284f-4be0-bc3f-7ec7bbd26314 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.811777019Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=04e0df67-adce-49b2-921f-9ab49cc02ff8 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.817717838Z" level=info msg="Creating container: default/busybox/busybox" id=e47876b9-7dfc-4cd1-a225-7801bbc77a52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.818498114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.823097174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.823688385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.844950028Z" level=info msg="Created container 3f5beae8f3733f1f96c1e7401bd14dcded323c38200b8ed0bf1eb50b99dbfd53: default/busybox/busybox" id=e47876b9-7dfc-4cd1-a225-7801bbc77a52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.845971258Z" level=info msg="Starting container: 3f5beae8f3733f1f96c1e7401bd14dcded323c38200b8ed0bf1eb50b99dbfd53" id=1ef3e69e-b52d-4968-bf4e-d00fe695c2b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:45:43 default-k8s-diff-port-850436 crio[835]: time="2025-10-16T19:45:43.847639314Z" level=info msg="Started container" PID=1794 containerID=3f5beae8f3733f1f96c1e7401bd14dcded323c38200b8ed0bf1eb50b99dbfd53 description=default/busybox/busybox id=1ef3e69e-b52d-4968-bf4e-d00fe695c2b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e35537491a07cc3230a2f9f1020da3c3675b317d85708c75c4c60b5f8c21f64c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	3f5beae8f3733       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   e35537491a07c       busybox                                                default
	c923c7eeb1c91       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   461a445dec89c       storage-provisioner                                    kube-system
	28151b933eb8d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   a841fad55a429       coredns-66bc5c9577-vnm65                               kube-system
	ec3493afd2f01       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   42fb44dbd2e51       kindnet-x85fg                                          kube-system
	f1a1fd0cbc17e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   49938d5650abd       kube-proxy-2l5ck                                       kube-system
	e3cf24d704222       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   c34891988c5e6       etcd-default-k8s-diff-port-850436                      kube-system
	97bebd144f6e9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   0e18586692d79       kube-controller-manager-default-k8s-diff-port-850436   kube-system
	10f0c0aa556d5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   93300a2e66971       kube-scheduler-default-k8s-diff-port-850436            kube-system
	e70706762213b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   fe0af4b3c9dfb       kube-apiserver-default-k8s-diff-port-850436            kube-system
	
	
	==> coredns [28151b933eb8dd99cd478156a0c1095051ad54497c8771013189e827210b114e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58849 - 12940 "HINFO IN 9176327178378355052.2162686211231371847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019964922s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-850436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-850436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=default-k8s-diff-port-850436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_44_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:44:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-850436
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:45:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:45:52 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:45:52 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:45:52 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:45:52 +0000   Thu, 16 Oct 2025 19:45:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-850436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                9d720de3-5d7a-422c-aff9-73121cba7d50
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-vnm65                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-850436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-x85fg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-850436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-850436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-2l5ck                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-850436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-850436 event: Registered Node default-k8s-diff-port-850436 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-850436 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct16 19:22] overlayfs: idmapped layers are currently not supported
	[  +5.025487] overlayfs: idmapped layers are currently not supported
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	[Oct16 19:44] overlayfs: idmapped layers are currently not supported
	[Oct16 19:45] overlayfs: idmapped layers are currently not supported
	[ +26.426905] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3cf24d70422269a86376d36a42e9c5adc715f5726e50e2d640da37584156635] <==
	{"level":"warn","ts":"2025-10-16T19:44:44.660342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.672305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.695819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.708260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.727873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.749904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.759739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.794145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.807006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.848181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.870624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.904266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.939315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.964240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:44.996770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.053020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.070094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.097948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.142011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.172206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.257492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.324419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.365940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.421996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:44:45.612693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54396","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:45:52 up  2:28,  0 user,  load average: 3.65, 3.52, 3.03
	Linux default-k8s-diff-port-850436 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec3493afd2f0193e0f43e09f5260dc91d43335bb739d546723fa055ecdc9359f] <==
	I1016 19:44:57.160462       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:44:57.169534       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:44:57.169717       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:44:57.169757       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:44:57.169798       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:44:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:44:57.361444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:44:57.361463       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:44:57.361472       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:44:57.409736       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:45:27.357233       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 19:45:27.362727       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:45:27.362820       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:45:27.410247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1016 19:45:28.561784       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:45:28.561826       1 metrics.go:72] Registering metrics
	I1016 19:45:28.561889       1 controller.go:711] "Syncing nftables rules"
	I1016 19:45:37.362606       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:45:37.362656       1 main.go:301] handling current node
	I1016 19:45:47.358004       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:45:47.358042       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e70706762213bb18ff2020e9c9ad6586429d538f3e8494537e139daaf9c3d45e] <==
	I1016 19:44:47.387111       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1016 19:44:47.387536       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1016 19:44:47.390473       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 19:44:47.427371       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:44:47.427522       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 19:44:47.432560       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 19:44:47.592518       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:44:47.906739       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 19:44:47.918379       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 19:44:47.918404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:44:49.085671       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:44:49.167929       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:44:49.258056       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 19:44:49.275845       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1016 19:44:49.276863       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 19:44:49.281518       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:44:50.104908       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 19:44:50.174644       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:44:50.219793       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 19:44:50.248058       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 19:44:55.837829       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1016 19:44:56.308422       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:44:56.434794       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 19:44:56.474281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1016 19:45:50.600051       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:48164: use of closed network connection
	
	
	==> kube-controller-manager [97bebd144f6e953eee7e5e6437c310a7da17f1bea58334ac2670542e3fe49702] <==
	I1016 19:44:55.103529       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 19:44:55.103611       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 19:44:55.105534       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 19:44:55.105507       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 19:44:55.105851       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:44:55.105928       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:44:55.111684       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 19:44:55.111933       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 19:44:55.114522       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 19:44:55.118813       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-850436" podCIDRs=["10.244.0.0/24"]
	I1016 19:44:55.119090       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 19:44:55.129904       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 19:44:55.155040       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 19:44:55.156189       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 19:44:55.163406       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:44:55.163794       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 19:44:55.170611       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:44:55.170661       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 19:44:55.187596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:44:55.187658       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:44:55.196225       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 19:44:55.223991       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:44:55.224017       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:44:55.224025       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 19:45:40.115803       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f1a1fd0cbc17e3a74fbb215d2fe44234110b0c6f2e8bdcb29aeef15c4adfa5a6] <==
	I1016 19:44:57.071598       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:44:57.174797       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:44:57.275323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:44:57.275365       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:44:57.275436       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:44:57.407278       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:44:57.407433       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:44:57.470786       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:44:57.477763       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:44:57.477790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:44:57.487796       1 config.go:200] "Starting service config controller"
	I1016 19:44:57.487818       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:44:57.487835       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:44:57.487839       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:44:57.487850       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:44:57.487853       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:44:57.489687       1 config.go:309] "Starting node config controller"
	I1016 19:44:57.489701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:44:57.588602       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 19:44:57.588635       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:44:57.588673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 19:44:57.597185       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [10f0c0aa556d55ff168ac2edc5f604c6ddb7224f5b66b0e67c2e7ad80228e111] <==
	I1016 19:44:48.416334       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:44:48.418770       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:44:48.418871       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:44:48.418899       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:44:48.418926       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 19:44:48.445368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 19:44:48.445583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 19:44:48.445698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 19:44:48.445816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 19:44:48.445903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 19:44:48.445940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 19:44:48.446063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 19:44:48.446123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 19:44:48.446173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 19:44:48.446213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 19:44:48.446183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 19:44:48.446268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 19:44:48.446324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 19:44:48.449431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1016 19:44:48.451384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 19:44:48.451467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 19:44:48.451640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 19:44:48.451868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 19:44:48.451984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1016 19:44:49.319603       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:44:51 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:51.900955    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-850436" podStartSLOduration=1.900937718 podStartE2EDuration="1.900937718s" podCreationTimestamp="2025-10-16 19:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:44:51.885995727 +0000 UTC m=+1.844532295" watchObservedRunningTime="2025-10-16 19:44:51.900937718 +0000 UTC m=+1.859474286"
	Oct 16 19:44:55 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:55.184760    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 16 19:44:55 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:55.186902    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.051444    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb08d80e-eae2-4cfe-adec-7dff53b69338-kube-proxy\") pod \"kube-proxy-2l5ck\" (UID: \"fb08d80e-eae2-4cfe-adec-7dff53b69338\") " pod="kube-system/kube-proxy-2l5ck"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.051486    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb08d80e-eae2-4cfe-adec-7dff53b69338-xtables-lock\") pod \"kube-proxy-2l5ck\" (UID: \"fb08d80e-eae2-4cfe-adec-7dff53b69338\") " pod="kube-system/kube-proxy-2l5ck"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.051509    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb08d80e-eae2-4cfe-adec-7dff53b69338-lib-modules\") pod \"kube-proxy-2l5ck\" (UID: \"fb08d80e-eae2-4cfe-adec-7dff53b69338\") " pod="kube-system/kube-proxy-2l5ck"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.051530    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqzm8\" (UniqueName: \"kubernetes.io/projected/fb08d80e-eae2-4cfe-adec-7dff53b69338-kube-api-access-rqzm8\") pod \"kube-proxy-2l5ck\" (UID: \"fb08d80e-eae2-4cfe-adec-7dff53b69338\") " pod="kube-system/kube-proxy-2l5ck"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.437499    1310 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.563308    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4767810-daa5-4517-ba09-8bf6504516b2-lib-modules\") pod \"kindnet-x85fg\" (UID: \"d4767810-daa5-4517-ba09-8bf6504516b2\") " pod="kube-system/kindnet-x85fg"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.563423    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4767810-daa5-4517-ba09-8bf6504516b2-xtables-lock\") pod \"kindnet-x85fg\" (UID: \"d4767810-daa5-4517-ba09-8bf6504516b2\") " pod="kube-system/kindnet-x85fg"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.563452    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d4767810-daa5-4517-ba09-8bf6504516b2-cni-cfg\") pod \"kindnet-x85fg\" (UID: \"d4767810-daa5-4517-ba09-8bf6504516b2\") " pod="kube-system/kindnet-x85fg"
	Oct 16 19:44:56 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:56.563506    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgr8\" (UniqueName: \"kubernetes.io/projected/d4767810-daa5-4517-ba09-8bf6504516b2-kube-api-access-nlgr8\") pod \"kindnet-x85fg\" (UID: \"d4767810-daa5-4517-ba09-8bf6504516b2\") " pod="kube-system/kindnet-x85fg"
	Oct 16 19:44:58 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:58.007195    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2l5ck" podStartSLOduration=3.007172525 podStartE2EDuration="3.007172525s" podCreationTimestamp="2025-10-16 19:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:44:57.019091996 +0000 UTC m=+6.977628572" watchObservedRunningTime="2025-10-16 19:44:58.007172525 +0000 UTC m=+7.965709101"
	Oct 16 19:44:59 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:44:59.315240    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-x85fg" podStartSLOduration=3.315211399 podStartE2EDuration="3.315211399s" podCreationTimestamp="2025-10-16 19:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:44:58.00825755 +0000 UTC m=+7.966794134" watchObservedRunningTime="2025-10-16 19:44:59.315211399 +0000 UTC m=+9.273747967"
	Oct 16 19:45:37 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:37.717115    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 19:45:37 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:37.851809    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/448486e9-ec0e-40c3-b106-5199d6090906-config-volume\") pod \"coredns-66bc5c9577-vnm65\" (UID: \"448486e9-ec0e-40c3-b106-5199d6090906\") " pod="kube-system/coredns-66bc5c9577-vnm65"
	Oct 16 19:45:37 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:37.852057    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78jqq\" (UniqueName: \"kubernetes.io/projected/448486e9-ec0e-40c3-b106-5199d6090906-kube-api-access-78jqq\") pod \"coredns-66bc5c9577-vnm65\" (UID: \"448486e9-ec0e-40c3-b106-5199d6090906\") " pod="kube-system/coredns-66bc5c9577-vnm65"
	Oct 16 19:45:37 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:37.952687    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4d591848-c88d-48c6-9cb8-6c660c47d3c6-tmp\") pod \"storage-provisioner\" (UID: \"4d591848-c88d-48c6-9cb8-6c660c47d3c6\") " pod="kube-system/storage-provisioner"
	Oct 16 19:45:37 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:37.952878    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd2t6\" (UniqueName: \"kubernetes.io/projected/4d591848-c88d-48c6-9cb8-6c660c47d3c6-kube-api-access-zd2t6\") pod \"storage-provisioner\" (UID: \"4d591848-c88d-48c6-9cb8-6c660c47d3c6\") " pod="kube-system/storage-provisioner"
	Oct 16 19:45:38 default-k8s-diff-port-850436 kubelet[1310]: W1016 19:45:38.145777    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/crio-a841fad55a429ef37b9fa38e440fffb3b99335b0cc01be19f426c8f36b059b1f WatchSource:0}: Error finding container a841fad55a429ef37b9fa38e440fffb3b99335b0cc01be19f426c8f36b059b1f: Status 404 returned error can't find the container with id a841fad55a429ef37b9fa38e440fffb3b99335b0cc01be19f426c8f36b059b1f
	Oct 16 19:45:38 default-k8s-diff-port-850436 kubelet[1310]: W1016 19:45:38.165458    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/crio-461a445dec89c9799ca3dff87725af40af0908053f8cda1a9a8ae722dbd00d95 WatchSource:0}: Error finding container 461a445dec89c9799ca3dff87725af40af0908053f8cda1a9a8ae722dbd00d95: Status 404 returned error can't find the container with id 461a445dec89c9799ca3dff87725af40af0908053f8cda1a9a8ae722dbd00d95
	Oct 16 19:45:39 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:39.143208    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.14318698 podStartE2EDuration="42.14318698s" podCreationTimestamp="2025-10-16 19:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:45:39.123739405 +0000 UTC m=+49.082275972" watchObservedRunningTime="2025-10-16 19:45:39.14318698 +0000 UTC m=+49.101723556"
	Oct 16 19:45:41 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:41.381767    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vnm65" podStartSLOduration=45.381748088 podStartE2EDuration="45.381748088s" podCreationTimestamp="2025-10-16 19:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 19:45:39.144635792 +0000 UTC m=+49.103172368" watchObservedRunningTime="2025-10-16 19:45:41.381748088 +0000 UTC m=+51.340284656"
	Oct 16 19:45:41 default-k8s-diff-port-850436 kubelet[1310]: I1016 19:45:41.480900    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7nt\" (UniqueName: \"kubernetes.io/projected/a85c2c7f-3f8e-42da-8972-737f3f75d285-kube-api-access-5m7nt\") pod \"busybox\" (UID: \"a85c2c7f-3f8e-42da-8972-737f3f75d285\") " pod="default/busybox"
	Oct 16 19:45:41 default-k8s-diff-port-850436 kubelet[1310]: W1016 19:45:41.719418    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/crio-e35537491a07cc3230a2f9f1020da3c3675b317d85708c75c4c60b5f8c21f64c WatchSource:0}: Error finding container e35537491a07cc3230a2f9f1020da3c3675b317d85708c75c4c60b5f8c21f64c: Status 404 returned error can't find the container with id e35537491a07cc3230a2f9f1020da3c3675b317d85708c75c4c60b5f8c21f64c
	
	
	==> storage-provisioner [c923c7eeb1c910b32fa24504800042f07880cdf2243ea7ca3223ccf80f5b1a09] <==
	I1016 19:45:38.292731       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:45:38.353204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:45:38.353266       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:45:38.355744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:38.369765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:45:38.370122       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:45:38.370585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"371d6569-f6ea-4eb0-a7cb-5543888dcf96", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-850436_68ae9cd7-6662-4631-a224-82b30c4f6923 became leader
	I1016 19:45:38.375169       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850436_68ae9cd7-6662-4631-a224-82b30c4f6923!
	W1016 19:45:38.382862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:38.392861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:45:38.476009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850436_68ae9cd7-6662-4631-a224-82b30c4f6923!
	W1016 19:45:40.395548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:40.400460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:42.404588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:42.412498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:44.415473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:44.429051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:46.432433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:46.437382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:48.441517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:48.446615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:50.451073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:50.459899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:52.462988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:45:52.476339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
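Note on the storage-provisioner tail above: it acquires the kube-system/k8s.io-minikube-hostpath lock and then logs "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" on every renewal, which is what an Endpoints-backed leader-election lock produces against a v1.34 apiserver. For reference only, a minimal client-go sketch of a Lease-based lock that would not emit those warnings is shown below; the lock name and namespace are copied from the log, the timings are arbitrary, and this is not the provisioner's actual code.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes the process runs in-cluster, like the storage-provisioner pod does.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease-based lock; name/namespace mirror the lock seen in the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Start the provisioner controller loop here once leadership is held.
			},
			OnStoppedLeading: func() {
				// Stop work when leadership is lost.
			},
		},
	})
}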
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-850436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.02s)
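The post-mortem above (helpers_test.go:269) checks cluster health by listing every pod whose phase is not Running, using kubectl's server-side field selector. For reference, the same query expressed with client-go looks roughly like the sketch below; it is illustrative only, and the kubeconfig context name is simply taken from the command above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig context the test uses.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-850436"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same server-side filter as `kubectl get po -A --field-selector=status.phase!=Running`.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}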

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-850436 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-850436 --alsologtostderr -v=1: exit status 80 (2.310237666s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-850436 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:47:12.690617  503657 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:47:12.690804  503657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:47:12.690818  503657 out.go:374] Setting ErrFile to fd 2...
	I1016 19:47:12.690824  503657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:47:12.691105  503657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:47:12.691818  503657 out.go:368] Setting JSON to false
	I1016 19:47:12.691844  503657 mustload.go:65] Loading cluster: default-k8s-diff-port-850436
	I1016 19:47:12.692307  503657 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:47:12.692801  503657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:47:12.717182  503657 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:47:12.717527  503657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:47:12.790467  503657 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:47:12.780169908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:47:12.791186  503657 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-850436 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 19:47:12.794900  503657 out.go:179] * Pausing node default-k8s-diff-port-850436 ... 
	I1016 19:47:12.798856  503657 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:47:12.799226  503657 ssh_runner.go:195] Run: systemctl --version
	I1016 19:47:12.799271  503657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:47:12.827855  503657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:47:12.932616  503657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:47:12.955521  503657 pause.go:52] kubelet running: true
	I1016 19:47:12.955684  503657 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:47:13.272443  503657 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:47:13.272542  503657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:47:13.358088  503657 cri.go:89] found id: "a3d43c810802a012980bb607f3ee226f9b47a963fa02f3bd528833fe420201ba"
	I1016 19:47:13.358161  503657 cri.go:89] found id: "b940324d4626152d5c0c25dda09100a15cd59317900c9d608332a078d8a55714"
	I1016 19:47:13.358181  503657 cri.go:89] found id: "8fbc3ea61b4840cffb138149604309a06a200993c1f68934c9f28f84215f43ca"
	I1016 19:47:13.358203  503657 cri.go:89] found id: "eef613b6fb796e0cad4b501a6d0821685cd8a7c54283320e4f50f4d158511a2a"
	I1016 19:47:13.358241  503657 cri.go:89] found id: "188edef414d15f9fcd0a85fa49e7243fbf77dab45649e305a2e60a979dedd27f"
	I1016 19:47:13.358267  503657 cri.go:89] found id: "2921ea52af99aa969071fb411fb52ba0f384fcc606004df4ff328bb7b0e640a5"
	I1016 19:47:13.358288  503657 cri.go:89] found id: "f415a5edf62f2fed33a35088647cc0f9936a583cf2985d885edf35900733bab2"
	I1016 19:47:13.358321  503657 cri.go:89] found id: "7a3b24f9c4c6aafecdde8d6b650ec0da77e3d7b5505503d38459f34464dc2a07"
	I1016 19:47:13.358345  503657 cri.go:89] found id: "a3f7185e8b7d30b96feaff04a980ad8d52b0865f5c6a2ae6f3ecc05241267bce"
	I1016 19:47:13.358377  503657 cri.go:89] found id: "b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b"
	I1016 19:47:13.358408  503657 cri.go:89] found id: "6c012908584051b30602aa87822b512b418fdd18370e18b61ac73fdae4230834"
	I1016 19:47:13.358431  503657 cri.go:89] found id: ""
	I1016 19:47:13.358517  503657 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:47:13.370905  503657 retry.go:31] will retry after 280.059805ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:47:13Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:47:13.651441  503657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:47:13.668260  503657 pause.go:52] kubelet running: false
	I1016 19:47:13.668424  503657 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:47:13.937732  503657 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:47:13.937884  503657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:47:14.110622  503657 cri.go:89] found id: "a3d43c810802a012980bb607f3ee226f9b47a963fa02f3bd528833fe420201ba"
	I1016 19:47:14.110647  503657 cri.go:89] found id: "b940324d4626152d5c0c25dda09100a15cd59317900c9d608332a078d8a55714"
	I1016 19:47:14.110653  503657 cri.go:89] found id: "8fbc3ea61b4840cffb138149604309a06a200993c1f68934c9f28f84215f43ca"
	I1016 19:47:14.110656  503657 cri.go:89] found id: "eef613b6fb796e0cad4b501a6d0821685cd8a7c54283320e4f50f4d158511a2a"
	I1016 19:47:14.110660  503657 cri.go:89] found id: "188edef414d15f9fcd0a85fa49e7243fbf77dab45649e305a2e60a979dedd27f"
	I1016 19:47:14.110663  503657 cri.go:89] found id: "2921ea52af99aa969071fb411fb52ba0f384fcc606004df4ff328bb7b0e640a5"
	I1016 19:47:14.110666  503657 cri.go:89] found id: "f415a5edf62f2fed33a35088647cc0f9936a583cf2985d885edf35900733bab2"
	I1016 19:47:14.110670  503657 cri.go:89] found id: "7a3b24f9c4c6aafecdde8d6b650ec0da77e3d7b5505503d38459f34464dc2a07"
	I1016 19:47:14.110697  503657 cri.go:89] found id: "a3f7185e8b7d30b96feaff04a980ad8d52b0865f5c6a2ae6f3ecc05241267bce"
	I1016 19:47:14.110709  503657 cri.go:89] found id: "b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b"
	I1016 19:47:14.110712  503657 cri.go:89] found id: "6c012908584051b30602aa87822b512b418fdd18370e18b61ac73fdae4230834"
	I1016 19:47:14.110716  503657 cri.go:89] found id: ""
	I1016 19:47:14.110773  503657 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:47:14.133328  503657 retry.go:31] will retry after 327.60402ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:47:14Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:47:14.461697  503657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:47:14.476275  503657 pause.go:52] kubelet running: false
	I1016 19:47:14.476392  503657 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 19:47:14.769297  503657 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 19:47:14.769447  503657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 19:47:14.899772  503657 cri.go:89] found id: "a3d43c810802a012980bb607f3ee226f9b47a963fa02f3bd528833fe420201ba"
	I1016 19:47:14.899849  503657 cri.go:89] found id: "b940324d4626152d5c0c25dda09100a15cd59317900c9d608332a078d8a55714"
	I1016 19:47:14.899868  503657 cri.go:89] found id: "8fbc3ea61b4840cffb138149604309a06a200993c1f68934c9f28f84215f43ca"
	I1016 19:47:14.899888  503657 cri.go:89] found id: "eef613b6fb796e0cad4b501a6d0821685cd8a7c54283320e4f50f4d158511a2a"
	I1016 19:47:14.899922  503657 cri.go:89] found id: "188edef414d15f9fcd0a85fa49e7243fbf77dab45649e305a2e60a979dedd27f"
	I1016 19:47:14.899947  503657 cri.go:89] found id: "2921ea52af99aa969071fb411fb52ba0f384fcc606004df4ff328bb7b0e640a5"
	I1016 19:47:14.899966  503657 cri.go:89] found id: "f415a5edf62f2fed33a35088647cc0f9936a583cf2985d885edf35900733bab2"
	I1016 19:47:14.899986  503657 cri.go:89] found id: "7a3b24f9c4c6aafecdde8d6b650ec0da77e3d7b5505503d38459f34464dc2a07"
	I1016 19:47:14.900006  503657 cri.go:89] found id: "a3f7185e8b7d30b96feaff04a980ad8d52b0865f5c6a2ae6f3ecc05241267bce"
	I1016 19:47:14.900039  503657 cri.go:89] found id: "b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b"
	I1016 19:47:14.900065  503657 cri.go:89] found id: "6c012908584051b30602aa87822b512b418fdd18370e18b61ac73fdae4230834"
	I1016 19:47:14.900086  503657 cri.go:89] found id: ""
	I1016 19:47:14.900168  503657 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 19:47:14.918892  503657 out.go:203] 
	W1016 19:47:14.922035  503657 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:47:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:47:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 19:47:14.922239  503657 out.go:285] * 
	* 
	W1016 19:47:14.929630  503657 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 19:47:14.933419  503657 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-850436 --alsologtostderr -v=1 failed: exit status 80
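The stderr above shows why the pause exits with GUEST_PAUSE: after disabling the kubelet, every attempt to enumerate running containers with `sudo runc list -f json` fails with "open /run/runc: no such file or directory", and minikube gives up after a few short backoffs (retry.go). The sketch below is a rough illustration of that retry-around-runc pattern, not minikube's implementation; the command is copied from the log, and the attempt count and backoff values are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers shells out to `sudo runc list -f json` and retries with
// backoff, because a freshly restarted CRI runtime may not have populated the
// runc state directory (/run/runc on this node) yet.
func listRunningContainers() ([]byte, error) {
	var lastErr error
	backoff := 250 * time.Millisecond
	for attempt := 0; attempt < 3; attempt++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("runc list failed: %w: %s", err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, lastErr
}

func main() {
	out, err := listRunningContainers()
	if err != nil {
		// Mirrors the point where the pause above stops retrying and exits.
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println(string(out))
}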
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-850436
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-850436:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a",
	        "Created": "2025-10-16T19:44:20.385325839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:46:06.37370582Z",
	            "FinishedAt": "2025-10-16T19:46:05.37115607Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/hosts",
	        "LogPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a-json.log",
	        "Name": "/default-k8s-diff-port-850436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-850436:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-850436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a",
	                "LowerDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-850436",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-850436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-850436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-850436",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-850436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5767ea9451eb2eb1a968ca80105b894f8f4635ab08eb1ab992015d5a0c86f68a",
	            "SandboxKey": "/var/run/docker/netns/5767ea9451eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-850436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:f2:a8:96:1e:7d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12c5ab8893cdac2531939d281a38b055f53ba9453adc3d59ffb5147c0257d0fe",
	                    "EndpointID": "51da62dbf992f5fb8c56e483b09a48bcad10067ff840cef2fc4060d2ea95d292",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-850436",
	                        "4aa7104008e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
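Most of the inspect dump above is not relevant to a pause failure; the interesting parts are the container state and the published SSH port. A shorter query in the same Go-template style the harness itself uses later in this log (the exact format strings here are illustrative, not part of the test):

    # container state only
    docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
    # paused/running flags from the State block shown above
    docker container inspect -f '{{.State.Paused}} {{.State.Running}}' default-k8s-diff-port-850436
    # host port mapped to 22/tcp (33463 in this run)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-850436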
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436: exit status 2 (491.275482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
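The {{.Host}} template reports only the host state, so "Running" here says nothing about the kubelet or apiserver, which is what the "(may be ok)" note hints at. A hedged way to see the per-component state is to query the other status fields; the field names below (Kubelet, APIServer, Kubeconfig) are the standard minikube status fields, assumed here rather than taken from this run:

    out/minikube-linux-arm64 status -p default-k8s-diff-port-850436 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'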
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-850436 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-850436 logs -n 25: (1.61309202s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ stop    │ -p newest-cni-408495 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-408495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ newest-cni-408495 image list --format=json                                                                                                                                                                                                    │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ pause   │ -p newest-cni-408495 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ delete  │ -p newest-cni-408495                                                                                                                                                                                                                          │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ delete  │ -p newest-cni-408495                                                                                                                                                                                                                          │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p auto-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-078761                  │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:47 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-850436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-850436 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:46 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-850436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:46 UTC │ 16 Oct 25 19:46 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:46 UTC │ 16 Oct 25 19:47 UTC │
	│ ssh     │ -p auto-078761 pgrep -a kubelet                                                                                                                                                                                                               │ auto-078761                  │ jenkins │ v1.37.0 │ 16 Oct 25 19:47 UTC │ 16 Oct 25 19:47 UTC │
	│ image   │ default-k8s-diff-port-850436 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:47 UTC │ 16 Oct 25 19:47 UTC │
	│ pause   │ -p default-k8s-diff-port-850436 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:46:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:46:05.994198  500720 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:46:05.994721  500720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:46:05.994753  500720 out.go:374] Setting ErrFile to fd 2...
	I1016 19:46:05.994772  500720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:46:05.995061  500720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:46:05.995473  500720 out.go:368] Setting JSON to false
	I1016 19:46:05.996421  500720 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8895,"bootTime":1760635071,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:46:05.996513  500720 start.go:141] virtualization:  
	I1016 19:46:06.001381  500720 out.go:179] * [default-k8s-diff-port-850436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:46:06.004494  500720 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:46:06.004555  500720 notify.go:220] Checking for updates...
	I1016 19:46:06.012093  500720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:46:06.015246  500720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:06.018158  500720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:46:06.021112  500720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:46:06.024120  500720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:46:06.027585  500720 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:06.028147  500720 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:46:06.061048  500720 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:46:06.061187  500720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:46:06.170912  500720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:46:06.144094854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:46:06.171028  500720 docker.go:318] overlay module found
	I1016 19:46:06.174176  500720 out.go:179] * Using the docker driver based on existing profile
	I1016 19:46:06.177043  500720 start.go:305] selected driver: docker
	I1016 19:46:06.177062  500720 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:46:06.177190  500720 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:46:06.177936  500720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:46:06.263017  500720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:46:06.247993485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:46:06.263351  500720 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:46:06.263386  500720 cni.go:84] Creating CNI manager for ""
	I1016 19:46:06.263447  500720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:46:06.263489  500720 start.go:349] cluster config:
	{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:46:06.266882  500720 out.go:179] * Starting "default-k8s-diff-port-850436" primary control-plane node in "default-k8s-diff-port-850436" cluster
	I1016 19:46:06.269884  500720 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:46:06.272835  500720 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:46:06.275705  500720 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:46:06.275777  500720 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:46:06.275788  500720 cache.go:58] Caching tarball of preloaded images
	I1016 19:46:06.275872  500720 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:46:06.275881  500720 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:46:06.275988  500720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json ...
	I1016 19:46:06.276202  500720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:46:06.297958  500720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:46:06.297978  500720 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:46:06.297998  500720 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:46:06.298021  500720 start.go:360] acquireMachinesLock for default-k8s-diff-port-850436: {Name:mk7e6cd57751a3c09c0a04e7fccd20808ff22477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:46:06.298073  500720 start.go:364] duration metric: took 35.816µs to acquireMachinesLock for "default-k8s-diff-port-850436"
	I1016 19:46:06.298092  500720 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:46:06.298098  500720 fix.go:54] fixHost starting: 
	I1016 19:46:06.298356  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:06.330530  500720 fix.go:112] recreateIfNeeded on default-k8s-diff-port-850436: state=Stopped err=<nil>
	W1016 19:46:06.330556  500720 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 19:46:05.244987  498106 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 19:46:05.245321  498106 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 19:46:05.780075  498106 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 19:46:06.244756  498106 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 19:46:08.105390  498106 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 19:46:08.727733  498106 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 19:46:09.135343  498106 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 19:46:09.136231  498106 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 19:46:09.139022  498106 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 19:46:09.142842  498106 out.go:252]   - Booting up control plane ...
	I1016 19:46:09.142949  498106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 19:46:09.143030  498106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 19:46:09.143101  498106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 19:46:09.161878  498106 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 19:46:09.162127  498106 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 19:46:09.170657  498106 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 19:46:09.171160  498106 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 19:46:09.171231  498106 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 19:46:09.305076  498106 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 19:46:09.305266  498106 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 19:46:09.819168  498106 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 511.110613ms
	I1016 19:46:09.820266  498106 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 19:46:09.820998  498106 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1016 19:46:09.821335  498106 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 19:46:09.822192  498106 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 19:46:06.333663  500720 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-850436" ...
	I1016 19:46:06.333757  500720 cli_runner.go:164] Run: docker start default-k8s-diff-port-850436
	I1016 19:46:06.646977  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:06.669604  500720 kic.go:430] container "default-k8s-diff-port-850436" state is running.
	I1016 19:46:06.670224  500720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:46:06.702833  500720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json ...
	I1016 19:46:06.703064  500720 machine.go:93] provisionDockerMachine start ...
	I1016 19:46:06.703129  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:06.738325  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:06.738641  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:06.738659  500720 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:46:06.739830  500720 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:46:09.908959  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850436
	
	I1016 19:46:09.908996  500720 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-850436"
	I1016 19:46:09.909083  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:09.928439  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:09.928745  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:09.928764  500720 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850436 && echo "default-k8s-diff-port-850436" | sudo tee /etc/hostname
	I1016 19:46:10.091473  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850436
	
	I1016 19:46:10.091570  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:10.110901  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:10.111224  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:10.111247  500720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:46:10.271166  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 19:46:10.271220  500720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:46:10.271243  500720 ubuntu.go:190] setting up certificates
	I1016 19:46:10.271252  500720 provision.go:84] configureAuth start
	I1016 19:46:10.271316  500720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:46:10.301446  500720 provision.go:143] copyHostCerts
	I1016 19:46:10.301504  500720 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:46:10.301521  500720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:46:10.301574  500720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:46:10.301657  500720 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:46:10.301662  500720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:46:10.301685  500720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:46:10.301738  500720 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:46:10.301743  500720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:46:10.301765  500720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:46:10.301809  500720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850436 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-850436 localhost minikube]
	I1016 19:46:10.906745  500720 provision.go:177] copyRemoteCerts
	I1016 19:46:10.906817  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:46:10.906868  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:10.924623  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:11.035414  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:46:11.062524  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:46:11.092793  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1016 19:46:11.128753  500720 provision.go:87] duration metric: took 857.473855ms to configureAuth
	I1016 19:46:11.128781  500720 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:46:11.128996  500720 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:11.129114  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.157919  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:11.158248  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:11.158271  500720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:46:11.578726  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:46:11.578805  500720 machine.go:96] duration metric: took 4.875718363s to provisionDockerMachine
	I1016 19:46:11.578831  500720 start.go:293] postStartSetup for "default-k8s-diff-port-850436" (driver="docker")
	I1016 19:46:11.578873  500720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:46:11.578971  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:46:11.579050  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.602079  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:11.731070  500720 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:46:11.734861  500720 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:46:11.734892  500720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:46:11.734904  500720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:46:11.734967  500720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:46:11.735055  500720 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:46:11.735158  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:46:11.749745  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:46:11.779701  500720 start.go:296] duration metric: took 200.839355ms for postStartSetup
	I1016 19:46:11.779785  500720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:46:11.779849  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.806758  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:11.934167  500720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:46:11.943485  500720 fix.go:56] duration metric: took 5.645380335s for fixHost
	I1016 19:46:11.943510  500720 start.go:83] releasing machines lock for "default-k8s-diff-port-850436", held for 5.64542836s
	I1016 19:46:11.943592  500720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:46:11.977077  500720 ssh_runner.go:195] Run: cat /version.json
	I1016 19:46:11.977128  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.977401  500720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:46:11.977448  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:12.025394  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:12.027338  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:12.167135  500720 ssh_runner.go:195] Run: systemctl --version
	I1016 19:46:12.317939  500720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:46:12.403702  500720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:46:12.408788  500720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:46:12.408951  500720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:46:12.422752  500720 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:46:12.422826  500720 start.go:495] detecting cgroup driver to use...
	I1016 19:46:12.422921  500720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:46:12.423010  500720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:46:12.446091  500720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:46:12.467772  500720 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:46:12.467895  500720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:46:12.495198  500720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:46:12.523605  500720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:46:12.724657  500720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:46:12.938886  500720 docker.go:234] disabling docker service ...
	I1016 19:46:12.939036  500720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:46:12.955001  500720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:46:12.983308  500720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:46:13.194958  500720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:46:13.410359  500720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:46:13.435531  500720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:46:13.460543  500720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:46:13.460612  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.477764  500720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:46:13.477836  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.505844  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.531036  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.542709  500720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:46:13.562501  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.584765  500720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.600593  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.619721  500720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:46:13.640385  500720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:46:13.655556  500720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:13.870522  500720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 19:46:14.065487  500720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:46:14.065650  500720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:46:14.077769  500720 start.go:563] Will wait 60s for crictl version
	I1016 19:46:14.077892  500720 ssh_runner.go:195] Run: which crictl
	I1016 19:46:14.085734  500720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:46:14.146611  500720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 19:46:14.146770  500720 ssh_runner.go:195] Run: crio --version
	I1016 19:46:14.191994  500720 ssh_runner.go:195] Run: crio --version
	I1016 19:46:14.254805  500720 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:46:14.257868  500720 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-850436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:46:14.285452  500720 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 19:46:14.289521  500720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:46:14.309558  500720 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:46:14.309666  500720 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:46:14.309717  500720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:46:14.363542  500720 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:46:14.363561  500720 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:46:14.363616  500720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:46:14.422709  500720 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:46:14.422728  500720 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:46:14.422735  500720 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1016 19:46:14.422833  500720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-850436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:46:14.422906  500720 ssh_runner.go:195] Run: crio config
	I1016 19:46:14.574993  500720 cni.go:84] Creating CNI manager for ""
	I1016 19:46:14.575067  500720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:46:14.575103  500720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:46:14.575164  500720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850436 NodeName:default-k8s-diff-port-850436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:46:14.575353  500720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850436"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:46:14.575482  500720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:46:14.583397  500720 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:46:14.583516  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:46:14.599185  500720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1016 19:46:14.620673  500720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:46:14.639571  500720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1016 19:46:14.659004  500720 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:46:14.663132  500720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
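	The bash one-liner above swaps the control-plane.minikube.internal record in /etc/hosts by filtering out any old entry and appending the current IP. A minimal Go sketch of the same rewrite (not minikube's code; the IP and hostname are the ones shown in the log, and writing /etc/hosts requires root):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const hostsFile = "/etc/hosts"
		const entry = "192.168.76.2\tcontrol-plane.minikube.internal" // values from the log
		data, err := os.ReadFile(hostsFile)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Same filter as the `grep -v $'\tcontrol-plane.minikube.internal$'` above.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}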
	I1016 19:46:14.677721  500720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:14.896848  500720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:46:14.931614  500720 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436 for IP: 192.168.76.2
	I1016 19:46:14.931685  500720 certs.go:195] generating shared ca certs ...
	I1016 19:46:14.931716  500720 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:14.931888  500720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:46:14.931963  500720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:46:14.931986  500720 certs.go:257] generating profile certs ...
	I1016 19:46:14.932135  500720 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.key
	I1016 19:46:14.932266  500720 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/apiserver.key.1d408be1
	I1016 19:46:14.932356  500720 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/proxy-client.key
	I1016 19:46:14.932516  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:46:14.932580  500720 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:46:14.932606  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:46:14.932670  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:46:14.932735  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:46:14.932798  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:46:14.932889  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:46:14.933703  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:46:14.969004  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:46:15.008355  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:46:15.036614  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:46:15.075054  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 19:46:15.115131  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:46:15.163275  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:46:15.202312  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:46:15.253927  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:46:15.302353  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:46:15.365294  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:46:15.408332  500720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:46:15.431308  500720 ssh_runner.go:195] Run: openssl version
	I1016 19:46:15.446437  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:46:15.457806  500720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:46:15.461644  500720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:46:15.461767  500720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:46:15.540276  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:46:15.551771  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:46:15.562888  500720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:46:15.567017  500720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:46:15.567143  500720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:46:15.609551  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:46:15.623673  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:46:15.637203  500720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:46:15.641464  500720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:46:15.641581  500720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:46:15.706112  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 19:46:15.718341  500720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:46:15.722803  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:46:15.782311  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:46:15.877470  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:46:15.971529  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:46:16.151141  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:46:16.232560  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
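	Each `openssl x509 -checkend 86400` call above asks whether a certificate will still be valid 24 hours (86400 seconds) from now; that is how the restart path decides whether the existing control-plane certificates can be reused. A minimal Go sketch of the same check using crypto/x509 (the certificate path is taken from the log; any PEM certificate works):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if NotAfter is
		// earlier than now + 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; it would be regenerated")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}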
	I1016 19:46:16.354984  500720 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:46:16.355128  500720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:46:16.355221  500720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:46:16.489840  500720 cri.go:89] found id: "2921ea52af99aa969071fb411fb52ba0f384fcc606004df4ff328bb7b0e640a5"
	I1016 19:46:16.489912  500720 cri.go:89] found id: "f415a5edf62f2fed33a35088647cc0f9936a583cf2985d885edf35900733bab2"
	I1016 19:46:16.489948  500720 cri.go:89] found id: "7a3b24f9c4c6aafecdde8d6b650ec0da77e3d7b5505503d38459f34464dc2a07"
	I1016 19:46:16.489971  500720 cri.go:89] found id: "a3f7185e8b7d30b96feaff04a980ad8d52b0865f5c6a2ae6f3ecc05241267bce"
	I1016 19:46:16.489991  500720 cri.go:89] found id: ""
	I1016 19:46:16.490071  500720 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:46:16.522250  500720 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:46:16Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:46:16.522409  500720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:46:16.546271  500720 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:46:16.546344  500720 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:46:16.546442  500720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:46:16.570583  500720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:46:16.571084  500720 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-850436" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:16.571249  500720 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-850436" cluster setting kubeconfig missing "default-k8s-diff-port-850436" context setting]
	I1016 19:46:16.571599  500720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:16.573363  500720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:46:16.594520  500720 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1016 19:46:16.594600  500720 kubeadm.go:601] duration metric: took 48.235824ms to restartPrimaryControlPlane
	I1016 19:46:16.594622  500720 kubeadm.go:402] duration metric: took 239.648351ms to StartCluster
	I1016 19:46:16.594668  500720 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:16.594763  500720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:16.595537  500720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:16.595802  500720 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:46:16.596174  500720 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:46:16.596252  500720 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850436"
	I1016 19:46:16.596265  500720 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-850436"
	W1016 19:46:16.596272  500720 addons.go:247] addon storage-provisioner should already be in state true
	I1016 19:46:16.596292  500720 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:46:16.596889  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.597300  500720 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:16.597408  500720 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-850436"
	I1016 19:46:16.597437  500720 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-850436"
	W1016 19:46:16.597456  500720 addons.go:247] addon dashboard should already be in state true
	I1016 19:46:16.597504  500720 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:46:16.597976  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.598509  500720 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850436"
	I1016 19:46:16.598537  500720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850436"
	I1016 19:46:16.598817  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.602490  500720 out.go:179] * Verifying Kubernetes components...
	I1016 19:46:16.607439  500720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:16.658732  500720 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 19:46:16.661747  500720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:46:16.664110  500720 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-850436"
	W1016 19:46:16.664130  500720 addons.go:247] addon default-storageclass should already be in state true
	I1016 19:46:16.664158  500720 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:46:16.664572  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.666323  500720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:16.666344  500720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:46:16.666399  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:16.666538  500720 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1016 19:46:16.190417  498106 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.368014024s
	I1016 19:46:18.842582  498106 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.018079962s
	I1016 19:46:20.324842  498106 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.503018435s
	I1016 19:46:20.346274  498106 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 19:46:20.368299  498106 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 19:46:20.387412  498106 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 19:46:20.387758  498106 kubeadm.go:318] [mark-control-plane] Marking the node auto-078761 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 19:46:20.409908  498106 kubeadm.go:318] [bootstrap-token] Using token: hj4xzy.uo6gwxqsrkjkbkd0
	I1016 19:46:16.669492  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 19:46:16.669516  500720 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 19:46:16.669588  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:16.712710  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:16.717268  500720 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:16.717288  500720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:46:16.717353  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:16.735462  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:16.752579  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:17.104998  500720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:17.185925  500720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:46:17.231700  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 19:46:17.231726  500720 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 19:46:17.247361  500720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:17.346877  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 19:46:17.346898  500720 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 19:46:17.441025  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 19:46:17.441045  500720 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 19:46:17.691810  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 19:46:17.691829  500720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 19:46:17.774364  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 19:46:17.774386  500720 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 19:46:17.818174  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 19:46:17.818239  500720 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 19:46:17.871045  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 19:46:17.871109  500720 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 19:46:17.922580  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 19:46:17.922645  500720 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 19:46:17.972266  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:46:17.972333  500720 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 19:46:18.009925  500720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:46:20.412777  498106 out.go:252]   - Configuring RBAC rules ...
	I1016 19:46:20.412946  498106 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 19:46:20.421775  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 19:46:20.441189  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 19:46:20.445608  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 19:46:20.452571  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 19:46:20.457100  498106 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 19:46:20.739479  498106 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 19:46:21.360262  498106 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 19:46:21.738981  498106 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 19:46:21.741920  498106 kubeadm.go:318] 
	I1016 19:46:21.741999  498106 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 19:46:21.742006  498106 kubeadm.go:318] 
	I1016 19:46:21.742085  498106 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 19:46:21.742090  498106 kubeadm.go:318] 
	I1016 19:46:21.742116  498106 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 19:46:21.742655  498106 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 19:46:21.742727  498106 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 19:46:21.742734  498106 kubeadm.go:318] 
	I1016 19:46:21.742791  498106 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 19:46:21.742795  498106 kubeadm.go:318] 
	I1016 19:46:21.742851  498106 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 19:46:21.742858  498106 kubeadm.go:318] 
	I1016 19:46:21.742912  498106 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 19:46:21.742990  498106 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 19:46:21.743061  498106 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 19:46:21.743070  498106 kubeadm.go:318] 
	I1016 19:46:21.743472  498106 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 19:46:21.743560  498106 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 19:46:21.743565  498106 kubeadm.go:318] 
	I1016 19:46:21.743914  498106 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token hj4xzy.uo6gwxqsrkjkbkd0 \
	I1016 19:46:21.744027  498106 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 19:46:21.744256  498106 kubeadm.go:318] 	--control-plane 
	I1016 19:46:21.744267  498106 kubeadm.go:318] 
	I1016 19:46:21.744643  498106 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 19:46:21.744653  498106 kubeadm.go:318] 
	I1016 19:46:21.744957  498106 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token hj4xzy.uo6gwxqsrkjkbkd0 \
	I1016 19:46:21.745347  498106 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
	I1016 19:46:21.750769  498106 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 19:46:21.751119  498106 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 19:46:21.751247  498106 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 19:46:21.751257  498106 cni.go:84] Creating CNI manager for ""
	I1016 19:46:21.751264  498106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:46:21.756796  498106 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 19:46:21.759672  498106 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 19:46:21.768346  498106 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 19:46:21.768365  498106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 19:46:21.802780  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 19:46:22.359535  498106 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 19:46:22.359670  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:22.359771  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-078761 minikube.k8s.io/updated_at=2025_10_16T19_46_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=auto-078761 minikube.k8s.io/primary=true
	I1016 19:46:22.655669  498106 ops.go:34] apiserver oom_adj: -16
	I1016 19:46:22.655781  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:23.156754  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:23.656321  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:24.155848  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:24.656257  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.190161  500720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.085126806s)
	I1016 19:46:26.190220  500720 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.004272204s)
	I1016 19:46:26.190249  500720 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850436" to be "Ready" ...
	I1016 19:46:26.190580  500720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.943152314s)
	I1016 19:46:26.190857  500720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.180808521s)
	I1016 19:46:26.194075  500720 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-850436 addons enable metrics-server
	
	I1016 19:46:26.236480  500720 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 19:46:25.156279  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:25.656679  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.156202  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.655935  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.763014  498106 kubeadm.go:1113] duration metric: took 4.403387394s to wait for elevateKubeSystemPrivileges
	I1016 19:46:26.763040  498106 kubeadm.go:402] duration metric: took 27.111078057s to StartCluster
	I1016 19:46:26.763057  498106 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:26.763120  498106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:26.764094  498106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:26.764320  498106 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:46:26.764411  498106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 19:46:26.764652  498106 config.go:182] Loaded profile config "auto-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:26.764700  498106 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:46:26.764766  498106 addons.go:69] Setting storage-provisioner=true in profile "auto-078761"
	I1016 19:46:26.764780  498106 addons.go:238] Setting addon storage-provisioner=true in "auto-078761"
	I1016 19:46:26.764816  498106 host.go:66] Checking if "auto-078761" exists ...
	I1016 19:46:26.765378  498106 addons.go:69] Setting default-storageclass=true in profile "auto-078761"
	I1016 19:46:26.765404  498106 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-078761"
	I1016 19:46:26.765655  498106 cli_runner.go:164] Run: docker container inspect auto-078761 --format={{.State.Status}}
	I1016 19:46:26.765933  498106 cli_runner.go:164] Run: docker container inspect auto-078761 --format={{.State.Status}}
	I1016 19:46:26.768082  498106 out.go:179] * Verifying Kubernetes components...
	I1016 19:46:26.771541  498106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:26.805507  498106 addons.go:238] Setting addon default-storageclass=true in "auto-078761"
	I1016 19:46:26.805553  498106 host.go:66] Checking if "auto-078761" exists ...
	I1016 19:46:26.805983  498106 cli_runner.go:164] Run: docker container inspect auto-078761 --format={{.State.Status}}
	I1016 19:46:26.823787  498106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:46:26.826864  498106 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:26.826888  498106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:46:26.826957  498106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078761
	I1016 19:46:26.846988  498106 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:26.847008  498106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:46:26.847072  498106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078761
	I1016 19:46:26.877374  498106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/auto-078761/id_rsa Username:docker}
	I1016 19:46:26.880903  498106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/auto-078761/id_rsa Username:docker}
	I1016 19:46:27.257117  498106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:27.299502  498106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 19:46:27.299695  498106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:46:27.339529  498106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:28.175598  498106 node_ready.go:35] waiting up to 15m0s for node "auto-078761" to be "Ready" ...
	I1016 19:46:28.175986  498106 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1016 19:46:28.229662  498106 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 19:46:28.232651  498106 addons.go:514] duration metric: took 1.467933946s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 19:46:28.679987  498106 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-078761" context rescaled to 1 replicas
	I1016 19:46:26.238820  500720 node_ready.go:49] node "default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:26.238889  500720 node_ready.go:38] duration metric: took 48.618223ms for node "default-k8s-diff-port-850436" to be "Ready" ...
	I1016 19:46:26.238919  500720 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:46:26.239012  500720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:46:26.241903  500720 addons.go:514] duration metric: took 9.645710356s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1016 19:46:26.255905  500720 api_server.go:72] duration metric: took 9.660040651s to wait for apiserver process to appear ...
	I1016 19:46:26.255980  500720 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:46:26.256012  500720 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1016 19:46:26.264842  500720 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1016 19:46:26.270159  500720 api_server.go:141] control plane version: v1.34.1
	I1016 19:46:26.270234  500720 api_server.go:131] duration metric: took 14.23417ms to wait for apiserver health ...
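	The healthz wait above issues HTTPS GETs against https://192.168.76.2:8444/healthz until the apiserver answers 200 with body "ok". A minimal Go sketch of one such probe (the endpoint is taken from the log; TLS verification is skipped here for brevity, whereas a real check would trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8444/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver returns 200 and the literal body "ok", as in the log.
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}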
	I1016 19:46:26.270260  500720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:46:26.280292  500720 system_pods.go:59] 8 kube-system pods found
	I1016 19:46:26.280380  500720 system_pods.go:61] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:46:26.280405  500720 system_pods.go:61] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:46:26.280447  500720 system_pods.go:61] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:46:26.280476  500720 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:46:26.280500  500720 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:46:26.280522  500720 system_pods.go:61] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:46:26.280559  500720 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:46:26.280585  500720 system_pods.go:61] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Running
	I1016 19:46:26.280607  500720 system_pods.go:74] duration metric: took 10.32895ms to wait for pod list to return data ...
	I1016 19:46:26.280628  500720 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:46:26.283732  500720 default_sa.go:45] found service account: "default"
	I1016 19:46:26.283799  500720 default_sa.go:55] duration metric: took 3.149592ms for default service account to be created ...
	I1016 19:46:26.283822  500720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:46:26.287835  500720 system_pods.go:86] 8 kube-system pods found
	I1016 19:46:26.287920  500720 system_pods.go:89] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:46:26.287951  500720 system_pods.go:89] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:46:26.287990  500720 system_pods.go:89] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:46:26.288020  500720 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:46:26.288043  500720 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:46:26.288065  500720 system_pods.go:89] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:46:26.288099  500720 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:46:26.288124  500720 system_pods.go:89] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Running
	I1016 19:46:26.288147  500720 system_pods.go:126] duration metric: took 4.306127ms to wait for k8s-apps to be running ...
	I1016 19:46:26.288168  500720 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:46:26.288251  500720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:46:26.303571  500720 system_svc.go:56] duration metric: took 15.395308ms WaitForService to wait for kubelet
	I1016 19:46:26.303642  500720 kubeadm.go:586] duration metric: took 9.707780653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:46:26.303676  500720 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:46:26.307157  500720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:46:26.307231  500720 node_conditions.go:123] node cpu capacity is 2
	I1016 19:46:26.307257  500720 node_conditions.go:105] duration metric: took 3.559615ms to run NodePressure ...
	I1016 19:46:26.307281  500720 start.go:241] waiting for startup goroutines ...
	I1016 19:46:26.307314  500720 start.go:246] waiting for cluster config update ...
	I1016 19:46:26.307346  500720 start.go:255] writing updated cluster config ...
	I1016 19:46:26.307674  500720 ssh_runner.go:195] Run: rm -f paused
	I1016 19:46:26.312185  500720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:46:26.316080  500720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vnm65" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:46:28.385532  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:30.822875  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:30.179238  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:32.182406  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:34.678920  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:32.823659  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:35.322663  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:37.178575  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:39.180062  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:37.821467  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:40.322447  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:41.679475  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:43.679843  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:42.821425  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:44.822382  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:46.179357  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:48.179456  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:47.321562  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:49.322204  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:50.179832  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:52.179938  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:54.180046  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:51.322307  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:53.322368  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:55.821828  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:56.678439  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:59.178933  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:58.322333  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	I1016 19:46:59.322151  500720 pod_ready.go:94] pod "coredns-66bc5c9577-vnm65" is "Ready"
	I1016 19:46:59.322184  500720 pod_ready.go:86] duration metric: took 33.006040892s for pod "coredns-66bc5c9577-vnm65" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.325182  500720 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.330217  500720 pod_ready.go:94] pod "etcd-default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:59.330242  500720 pod_ready.go:86] duration metric: took 5.036196ms for pod "etcd-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.332793  500720 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.338352  500720 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:59.338381  500720 pod_ready.go:86] duration metric: took 5.56362ms for pod "kube-apiserver-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.340607  500720 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.521164  500720 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:59.521189  500720 pod_ready.go:86] duration metric: took 180.55668ms for pod "kube-controller-manager-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.719984  500720 pod_ready.go:83] waiting for pod "kube-proxy-2l5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.122846  500720 pod_ready.go:94] pod "kube-proxy-2l5ck" is "Ready"
	I1016 19:47:00.122875  500720 pod_ready.go:86] duration metric: took 402.861351ms for pod "kube-proxy-2l5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.322437  500720 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.720211  500720 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-850436" is "Ready"
	I1016 19:47:00.720243  500720 pod_ready.go:86] duration metric: took 397.774834ms for pod "kube-scheduler-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.720255  500720 pod_ready.go:40] duration metric: took 34.407993349s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:47:00.780381  500720 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:47:00.784141  500720 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-850436" cluster and "default" namespace by default
	W1016 19:47:01.182047  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:47:03.678933  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:47:05.679047  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:47:08.179396  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	I1016 19:47:08.679578  498106 node_ready.go:49] node "auto-078761" is "Ready"
	I1016 19:47:08.679612  498106 node_ready.go:38] duration metric: took 40.503940159s for node "auto-078761" to be "Ready" ...
	I1016 19:47:08.679626  498106 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:47:08.679686  498106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:47:08.691721  498106 api_server.go:72] duration metric: took 41.927364569s to wait for apiserver process to appear ...
	I1016 19:47:08.691757  498106 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:47:08.691778  498106 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:47:08.699986  498106 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1016 19:47:08.701020  498106 api_server.go:141] control plane version: v1.34.1
	I1016 19:47:08.701043  498106 api_server.go:131] duration metric: took 9.278816ms to wait for apiserver health ...
	I1016 19:47:08.701052  498106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:47:08.704165  498106 system_pods.go:59] 8 kube-system pods found
	I1016 19:47:08.704203  498106 system_pods.go:61] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:08.704210  498106 system_pods.go:61] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:08.704216  498106 system_pods.go:61] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:08.704221  498106 system_pods.go:61] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:08.704225  498106 system_pods.go:61] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:08.704241  498106 system_pods.go:61] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:08.704249  498106 system_pods.go:61] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:08.704255  498106 system_pods.go:61] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:08.704262  498106 system_pods.go:74] duration metric: took 3.203567ms to wait for pod list to return data ...
	I1016 19:47:08.704273  498106 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:47:08.706790  498106 default_sa.go:45] found service account: "default"
	I1016 19:47:08.706814  498106 default_sa.go:55] duration metric: took 2.534668ms for default service account to be created ...
	I1016 19:47:08.706823  498106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:47:08.715224  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:08.715260  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:08.715267  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:08.715277  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:08.715282  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:08.715310  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:08.715324  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:08.715329  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:08.715335  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:08.715365  498106 retry.go:31] will retry after 278.562904ms: missing components: kube-dns
	I1016 19:47:09.001085  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:09.001122  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:09.001129  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:09.001180  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:09.001186  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:09.001196  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:09.001201  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:09.001211  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:09.001217  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:09.001242  498106 retry.go:31] will retry after 340.546737ms: missing components: kube-dns
	I1016 19:47:09.347024  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:09.347068  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:09.347077  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:09.347084  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:09.347094  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:09.347099  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:09.347103  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:09.347107  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:09.347124  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:09.347145  498106 retry.go:31] will retry after 339.55518ms: missing components: kube-dns
	I1016 19:47:09.690504  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:09.690542  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:09.690549  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:09.690555  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:09.690559  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:09.690564  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:09.690570  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:09.690576  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:09.690587  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:09.690603  498106 retry.go:31] will retry after 462.212589ms: missing components: kube-dns
	I1016 19:47:10.157567  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:10.157644  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Running
	I1016 19:47:10.157660  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:10.157665  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:10.157669  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:10.157673  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:10.157681  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:10.157695  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:10.157699  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Running
	I1016 19:47:10.157731  498106 system_pods.go:126] duration metric: took 1.450901957s to wait for k8s-apps to be running ...
	I1016 19:47:10.157751  498106 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:47:10.157852  498106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:47:10.173822  498106 system_svc.go:56] duration metric: took 16.055792ms WaitForService to wait for kubelet
	I1016 19:47:10.173852  498106 kubeadm.go:586] duration metric: took 43.409499747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:47:10.173870  498106 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:47:10.182529  498106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:47:10.182566  498106 node_conditions.go:123] node cpu capacity is 2
	I1016 19:47:10.182590  498106 node_conditions.go:105] duration metric: took 8.705082ms to run NodePressure ...
	I1016 19:47:10.182621  498106 start.go:241] waiting for startup goroutines ...
	I1016 19:47:10.182637  498106 start.go:246] waiting for cluster config update ...
	I1016 19:47:10.182648  498106 start.go:255] writing updated cluster config ...
	I1016 19:47:10.183033  498106 ssh_runner.go:195] Run: rm -f paused
	I1016 19:47:10.187466  498106 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:47:10.191451  498106 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-46x84" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.196484  498106 pod_ready.go:94] pod "coredns-66bc5c9577-46x84" is "Ready"
	I1016 19:47:10.196518  498106 pod_ready.go:86] duration metric: took 5.039224ms for pod "coredns-66bc5c9577-46x84" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.198965  498106 pod_ready.go:83] waiting for pod "etcd-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.203679  498106 pod_ready.go:94] pod "etcd-auto-078761" is "Ready"
	I1016 19:47:10.203706  498106 pod_ready.go:86] duration metric: took 4.715963ms for pod "etcd-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.206299  498106 pod_ready.go:83] waiting for pod "kube-apiserver-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.210823  498106 pod_ready.go:94] pod "kube-apiserver-auto-078761" is "Ready"
	I1016 19:47:10.210855  498106 pod_ready.go:86] duration metric: took 4.529154ms for pod "kube-apiserver-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.213471  498106 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.592153  498106 pod_ready.go:94] pod "kube-controller-manager-auto-078761" is "Ready"
	I1016 19:47:10.592177  498106 pod_ready.go:86] duration metric: took 378.682788ms for pod "kube-controller-manager-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.791174  498106 pod_ready.go:83] waiting for pod "kube-proxy-x4869" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.191802  498106 pod_ready.go:94] pod "kube-proxy-x4869" is "Ready"
	I1016 19:47:11.191830  498106 pod_ready.go:86] duration metric: took 400.626233ms for pod "kube-proxy-x4869" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.392233  498106 pod_ready.go:83] waiting for pod "kube-scheduler-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.791492  498106 pod_ready.go:94] pod "kube-scheduler-auto-078761" is "Ready"
	I1016 19:47:11.791522  498106 pod_ready.go:86] duration metric: took 399.258498ms for pod "kube-scheduler-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.791535  498106 pod_ready.go:40] duration metric: took 1.603988703s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:47:11.853304  498106 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:47:11.857701  498106 out.go:179] * Done! kubectl is now configured to use "auto-078761" cluster and "default" namespace by default
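The long runs of pod_ready/node_ready retries in the log above are a simple poll-until-Ready loop against the apiserver. A minimal client-go sketch of that pattern follows; the kubeconfig path, namespace, label selector and retry interval are illustrative assumptions, not minikube's exact implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True,
// which is the check the pod_ready.go lines above are logging.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; minikube writes its own under the test home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the "extra waiting up to 4m0s" budget
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("pods are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second) // the waits above retry on a similar cadence
	}
	fmt.Println("timed out waiting for Ready")
}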
	
	
	==> CRI-O <==
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.37991573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f17da9c2-778b-497b-b7a1-c2f1b9c96f0c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.381253404Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2bf31473-64a4-452b-a448-9ed5d4b54083 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.382449988Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper" id=c1ef081c-c943-4365-84c6-289e06a08fbd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.382659517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.39236141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.392936219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.410531959Z" level=info msg="Created container b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper" id=c1ef081c-c943-4365-84c6-289e06a08fbd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.411199545Z" level=info msg="Starting container: b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b" id=de023940-21df-459b-9198-2f04421fbefb name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.41380436Z" level=info msg="Started container" PID=1655 containerID=b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper id=de023940-21df-459b-9198-2f04421fbefb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068
	Oct 16 19:47:04 default-k8s-diff-port-850436 conmon[1653]: conmon b93060d3e7af49e77f76 <ninfo>: container 1655 exited with status 1
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.615732487Z" level=info msg="Removing container: 41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9" id=48e07b46-8431-49af-919e-5940d49a1908 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.627980586Z" level=info msg="Error loading conmon cgroup of container 41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9: cgroup deleted" id=48e07b46-8431-49af-919e-5940d49a1908 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.631925569Z" level=info msg="Removed container 41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper" id=48e07b46-8431-49af-919e-5940d49a1908 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.430129067Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.439745626Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.439909091Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.440031833Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.449845826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.449886778Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.449908522Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.454612899Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.454770447Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.454853665Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.458299762Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.45833689Z" level=info msg="Updated default CNI network name to kindnet"
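The "CNI monitoring event CREATE/WRITE/RENAME" lines above come from CRI-O watching /etc/cni/net.d for conflist changes and reloading the default network when kindnet rewrites its config. A rough sketch of that kind of directory watch, using fsnotify as an assumed stand-in for CRI-O's internal watcher:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory for new or rewritten conflist files.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-w.Events:
			// CREATE, WRITE and RENAME events correspond to the
			// "CNI monitoring event ..." lines in the CRI-O log above.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}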
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	b93060d3e7af4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   2                   e0b8ecc71c6a4       dashboard-metrics-scraper-6ffb444bf9-kqn95             kubernetes-dashboard
	a3d43c810802a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   e3b00175d186c       storage-provisioner                                    kube-system
	6c01290858405       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   550f48919508f       kubernetes-dashboard-855c9754f9-ng9x9                  kubernetes-dashboard
	281e10eedb7a1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   5142691233b06       busybox                                                default
	b940324d46261       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   9a9f26f0c31d5       kube-proxy-2l5ck                                       kube-system
	8fbc3ea61b484       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   c365c4d49d6ff       kindnet-x85fg                                          kube-system
	eef613b6fb796       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   e3b00175d186c       storage-provisioner                                    kube-system
	188edef414d15       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   48b5c77716ea3       coredns-66bc5c9577-vnm65                               kube-system
	2921ea52af99a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b6db1d49fcf21       kube-apiserver-default-k8s-diff-port-850436            kube-system
	f415a5edf62f2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8a44e814225a5       kube-controller-manager-default-k8s-diff-port-850436   kube-system
	7a3b24f9c4c6a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   8fdca4c99c05c       kube-scheduler-default-k8s-diff-port-850436            kube-system
	a3f7185e8b7d3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a86b362171ea4       etcd-default-k8s-diff-port-850436                      kube-system
	
	
	==> coredns [188edef414d15f9fcd0a85fa49e7243fbf77dab45649e305a2e60a979dedd27f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60221 - 7660 "HINFO IN 1079641673264027012.4700828975717732101. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013758366s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-850436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-850436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=default-k8s-diff-port-850436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_44_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:44:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-850436
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:47:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:45:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-850436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                9d720de3-5d7a-422c-aff9-73121cba7d50
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-vnm65                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-850436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-x85fg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-850436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-850436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-2l5ck                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-850436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kqn95              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ng9x9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-850436 event: Registered Node default-k8s-diff-port-850436 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-850436 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-850436 event: Registered Node default-k8s-diff-port-850436 in Controller
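The Capacity/Allocatable figures in this describe output are the same quantities the node_conditions.go lines report earlier ("node storage ephemeral capacity is 203034800Ki", "node cpu capacity is 2"). A minimal client-go sketch of reading them; the kubeconfig path is an assumed placeholder:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"default-k8s-diff-port-850436", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The same values the verifier logs as ephemeral storage and cpu capacity.
	storage := node.Status.Allocatable[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Allocatable[corev1.ResourceCPU]
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
}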
	
	
	==> dmesg <==
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	[Oct16 19:44] overlayfs: idmapped layers are currently not supported
	[Oct16 19:45] overlayfs: idmapped layers are currently not supported
	[ +26.426905] overlayfs: idmapped layers are currently not supported
	[Oct16 19:46] overlayfs: idmapped layers are currently not supported
	[  +5.629854] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a3f7185e8b7d30b96feaff04a980ad8d52b0865f5c6a2ae6f3ecc05241267bce] <==
	{"level":"warn","ts":"2025-10-16T19:46:22.058284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.113781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.164806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.188797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.216172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.246634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.266471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.294033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.336886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.444271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.465002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.520679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.568130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.596561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.617544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.641632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.664795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.693041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.754376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.769820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.799545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.833679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.853514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.878524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.993092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:47:16 up  2:29,  0 user,  load average: 2.92, 3.45, 3.05
	Linux default-k8s-diff-port-850436 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8fbc3ea61b4840cffb138149604309a06a200993c1f68934c9f28f84215f43ca] <==
	I1016 19:46:25.264486       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:46:25.281696       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:46:25.281840       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:46:25.281852       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:46:25.281876       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:46:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:46:25.429169       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:46:25.429197       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:46:25.429205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:46:25.429830       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:46:55.429719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:46:55.429743       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:46:55.429799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 19:46:55.429839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1016 19:46:56.929280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:46:56.929315       1 metrics.go:72] Registering metrics
	I1016 19:46:56.929388       1 controller.go:711] "Syncing nftables rules"
	I1016 19:47:05.429463       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:47:05.429510       1 main.go:301] handling current node
	I1016 19:47:15.429505       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:47:15.429553       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2921ea52af99aa969071fb411fb52ba0f384fcc606004df4ff328bb7b0e640a5] <==
	I1016 19:46:24.100577       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 19:46:24.100611       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:46:24.101623       1 aggregator.go:171] initial CRD sync complete...
	I1016 19:46:24.101652       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 19:46:24.101660       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:46:24.101665       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:46:24.115842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:46:24.116081       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:46:24.116129       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:46:24.133377       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 19:46:24.155046       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:46:24.245597       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:46:24.249893       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:46:24.318774       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1016 19:46:24.341698       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:46:24.760818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:46:25.560843       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:46:25.715863       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:46:25.756760       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:46:25.774402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:46:25.884945       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.46.26"}
	I1016 19:46:25.910460       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.167.119"}
	I1016 19:46:28.736835       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:46:28.784325       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:46:28.886027       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f415a5edf62f2fed33a35088647cc0f9936a583cf2985d885edf35900733bab2] <==
	I1016 19:46:28.413332       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:46:28.427593       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 19:46:28.430719       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 19:46:28.430830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 19:46:28.430898       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 19:46:28.434435       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:46:28.434515       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:46:28.437220       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 19:46:28.439922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:46:28.442974       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 19:46:28.443029       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:46:28.448022       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 19:46:28.448131       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:46:28.448383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:46:28.454692       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:46:28.454779       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 19:46:28.454951       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:46:28.455029       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-850436"
	I1016 19:46:28.455070       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 19:46:28.457565       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:46:28.459144       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 19:46:28.474928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:46:28.478888       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:46:28.478914       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:46:28.478922       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [b940324d4626152d5c0c25dda09100a15cd59317900c9d608332a078d8a55714] <==
	I1016 19:46:25.537248       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:46:25.654994       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:46:25.757275       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:46:25.758994       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:46:25.778614       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:46:26.089019       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:46:26.089157       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:46:26.093837       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:46:26.094228       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:46:26.094428       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:46:26.095712       1 config.go:200] "Starting service config controller"
	I1016 19:46:26.095768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:46:26.095815       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:46:26.095842       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:46:26.095881       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:46:26.095906       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:46:26.100014       1 config.go:309] "Starting node config controller"
	I1016 19:46:26.100110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:46:26.100143       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:46:26.199719       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:46:26.199909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 19:46:26.200198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a3b24f9c4c6aafecdde8d6b650ec0da77e3d7b5505503d38459f34464dc2a07] <==
	I1016 19:46:20.514708       1 serving.go:386] Generated self-signed cert in-memory
	W1016 19:46:24.102479       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1016 19:46:24.102570       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1016 19:46:24.102610       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1016 19:46:24.102640       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1016 19:46:24.213066       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:46:24.213095       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:46:24.225260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:46:24.250063       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:46:24.250590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:46:24.250604       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:46:24.350659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.015100     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7rp\" (UniqueName: \"kubernetes.io/projected/a086c25f-aa8c-4925-b778-32f4312b58da-kube-api-access-ss7rp\") pod \"kubernetes-dashboard-855c9754f9-ng9x9\" (UID: \"a086c25f-aa8c-4925-b778-32f4312b58da\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ng9x9"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.015339     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da221b3d-582e-4d7e-9190-ad4205d7a0e1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kqn95\" (UID: \"da221b3d-582e-4d7e-9190-ad4205d7a0e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.015476     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7bf\" (UniqueName: \"kubernetes.io/projected/da221b3d-582e-4d7e-9190-ad4205d7a0e1-kube-api-access-8q7bf\") pod \"dashboard-metrics-scraper-6ffb444bf9-kqn95\" (UID: \"da221b3d-582e-4d7e-9190-ad4205d7a0e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.146104     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: W1016 19:46:29.312458     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/crio-550f48919508f8071f3a41f22973f9e76b8bd6155ff072b93ae050dc7e9c7779 WatchSource:0}: Error finding container 550f48919508f8071f3a41f22973f9e76b8bd6155ff072b93ae050dc7e9c7779: Status 404 returned error can't find the container with id 550f48919508f8071f3a41f22973f9e76b8bd6155ff072b93ae050dc7e9c7779
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: W1016 19:46:29.331832     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/crio-e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068 WatchSource:0}: Error finding container e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068: Status 404 returned error can't find the container with id e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068
	Oct 16 19:46:39 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:39.543553     776 scope.go:117] "RemoveContainer" containerID="20c26f4be03399765ded8c75bbb6f522ed219d8c4f33be833024e86efa6d2bce"
	Oct 16 19:46:39 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:39.572125     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ng9x9" podStartSLOduration=6.264343898 podStartE2EDuration="11.572106083s" podCreationTimestamp="2025-10-16 19:46:28 +0000 UTC" firstStartedPulling="2025-10-16 19:46:29.320138941 +0000 UTC m=+14.402314060" lastFinishedPulling="2025-10-16 19:46:34.627901126 +0000 UTC m=+19.710076245" observedRunningTime="2025-10-16 19:46:35.548435858 +0000 UTC m=+20.630610985" watchObservedRunningTime="2025-10-16 19:46:39.572106083 +0000 UTC m=+24.654281202"
	Oct 16 19:46:40 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:40.548392     776 scope.go:117] "RemoveContainer" containerID="20c26f4be03399765ded8c75bbb6f522ed219d8c4f33be833024e86efa6d2bce"
	Oct 16 19:46:40 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:40.548742     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:46:40 default-k8s-diff-port-850436 kubelet[776]: E1016 19:46:40.548896     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:46:41 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:41.552439     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:46:41 default-k8s-diff-port-850436 kubelet[776]: E1016 19:46:41.552609     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:46:49 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:49.295055     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:46:49 default-k8s-diff-port-850436 kubelet[776]: E1016 19:46:49.295251     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:46:55 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:55.587875     776 scope.go:117] "RemoveContainer" containerID="eef613b6fb796e0cad4b501a6d0821685cd8a7c54283320e4f50f4d158511a2a"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:04.379410     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:04.613920     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:04.614153     776 scope.go:117] "RemoveContainer" containerID="b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: E1016 19:47:04.614322     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:47:09 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:09.295239     776 scope.go:117] "RemoveContainer" containerID="b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b"
	Oct 16 19:47:09 default-k8s-diff-port-850436 kubelet[776]: E1016 19:47:09.295908     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:47:13 default-k8s-diff-port-850436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:47:13 default-k8s-diff-port-850436 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:47:13 default-k8s-diff-port-850436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6c012908584051b30602aa87822b512b418fdd18370e18b61ac73fdae4230834] <==
	2025/10/16 19:46:34 Starting overwatch
	2025/10/16 19:46:34 Using namespace: kubernetes-dashboard
	2025/10/16 19:46:34 Using in-cluster config to connect to apiserver
	2025/10/16 19:46:34 Using secret token for csrf signing
	2025/10/16 19:46:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:46:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:46:34 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 19:46:34 Generating JWE encryption key
	2025/10/16 19:46:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:46:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:46:35 Initializing JWE encryption key from synchronized object
	2025/10/16 19:46:35 Creating in-cluster Sidecar client
	2025/10/16 19:46:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:46:35 Serving insecurely on HTTP port: 9090
	2025/10/16 19:47:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a3d43c810802a012980bb607f3ee226f9b47a963fa02f3bd528833fe420201ba] <==
	I1016 19:46:55.649534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:46:55.662656       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:46:55.662725       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:46:55.664895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:46:59.120680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:03.382074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:06.979868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:10.033564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:13.056911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:13.063960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:47:13.064150       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:47:13.064771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"371d6569-f6ea-4eb0-a7cb-5543888dcf96", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-850436_14a473fb-f1e6-4fb0-a565-f2a20a7ccffa became leader
	I1016 19:47:13.064938       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850436_14a473fb-f1e6-4fb0-a565-f2a20a7ccffa!
	W1016 19:47:13.098771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:13.102626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:47:13.165303       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850436_14a473fb-f1e6-4fb0-a565-f2a20a7ccffa!
	W1016 19:47:15.105900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:15.123852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eef613b6fb796e0cad4b501a6d0821685cd8a7c54283320e4f50f4d158511a2a] <==
	I1016 19:46:25.053889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:46:55.059845       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
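
The storage-provisioner failure at the end of the log above is a timed-out GET of /version against the in-cluster apiserver service IP (10.96.0.1:443), consistent with the apiserver being unreachable while paused. Below is a minimal sketch of an equivalent probe, assuming client-go and in-cluster credentials; it is illustrative only, not the provisioner's actual code.

// apiserver_version_probe.go - sketch of the version check that failed above.
// Assumes client-go and that this runs inside a pod with in-cluster credentials.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves the apiserver from KUBERNETES_SERVICE_HOST/PORT,
	// which in this cluster point at 10.96.0.1:443.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Roughly equivalent to GET /version; fails with an i/o timeout
	// when the apiserver is paused or otherwise unreachable.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver version:", v.GitVersion)
}
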
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436: exit status 2 (385.216528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
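
The status --format={{.APIServer}} invocation above renders a Go text/template over minikube's status data. A small sketch of the same mechanism follows; the struct and its values are illustrative and only mirror field names visible in this report, not minikube's actual types.

// status_template.go - sketch of rendering a {{.APIServer}}-style template.
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in struct; field names mirror the report output only.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Paused"
}
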
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-850436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
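
The kubectl query above lists pods whose phase is not Running via a field selector. An equivalent check with client-go might look like the following sketch; it assumes the default kubeconfig's current context points at this cluster, and the error handling is illustrative.

// not_running_pods.go - sketch of the field-selector query used by the harness.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the harness instead passes --context explicitly.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as the kubectl invocation: pods whose phase is not Running.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
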
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-850436
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-850436:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a",
	        "Created": "2025-10-16T19:44:20.385325839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T19:46:06.37370582Z",
	            "FinishedAt": "2025-10-16T19:46:05.37115607Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/hosts",
	        "LogPath": "/var/lib/docker/containers/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a-json.log",
	        "Name": "/default-k8s-diff-port-850436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-850436:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-850436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a",
	                "LowerDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3-init/diff:/var/lib/docker/overlay2/4a22ef20958f1d0aba10970ef7ed09dc5ca9d2479766a33211eb557ebfa3166b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/704a7d346d8fb60187e66a824bc70cd63e48122ca5c9005a5543db75cf0cedf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-850436",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-850436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-850436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-850436",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-850436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5767ea9451eb2eb1a968ca80105b894f8f4635ab08eb1ab992015d5a0c86f68a",
	            "SandboxKey": "/var/run/docker/netns/5767ea9451eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-850436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:f2:a8:96:1e:7d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12c5ab8893cdac2531939d281a38b055f53ba9453adc3d59ffb5147c0257d0fe",
	                    "EndpointID": "51da62dbf992f5fb8c56e483b09a48bcad10067ff840cef2fc4060d2ea95d292",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-850436",
	                        "4aa7104008e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
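
The inspect output above shows each exposed container port bound to an ephemeral host port on 127.0.0.1 (for example 8444/tcp -> 33466). A short sketch of reading those published ports programmatically with the Docker Go SDK follows; the container name is taken from this report and everything else is illustrative.

// published_ports.go - sketch of listing host port bindings via the Docker Go SDK.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "default-k8s-diff-port-850436")
	if err != nil {
		panic(err)
	}
	// NetworkSettings.Ports maps container ports to host bindings,
	// matching the "Ports" section of the docker inspect output above.
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}
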
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436: exit status 2 (367.830668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-850436 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-850436 logs -n 25: (1.298453591s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p no-preload-225696                                                                                                                                                                                                                          │ no-preload-225696            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p disable-driver-mounts-031282                                                                                                                                                                                                               │ disable-driver-mounts-031282 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ embed-certs-751669 image list --format=json                                                                                                                                                                                                   │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ pause   │ -p embed-certs-751669 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │                     │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ delete  │ -p embed-certs-751669                                                                                                                                                                                                                         │ embed-certs-751669           │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:44 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:44 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-408495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ stop    │ -p newest-cni-408495 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ addons  │ enable dashboard -p newest-cni-408495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ image   │ newest-cni-408495 image list --format=json                                                                                                                                                                                                    │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ pause   │ -p newest-cni-408495 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ delete  │ -p newest-cni-408495                                                                                                                                                                                                                          │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ delete  │ -p newest-cni-408495                                                                                                                                                                                                                          │ newest-cni-408495            │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:45 UTC │
	│ start   │ -p auto-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-078761                  │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:47 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-850436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-850436 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:45 UTC │ 16 Oct 25 19:46 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-850436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:46 UTC │ 16 Oct 25 19:46 UTC │
	│ start   │ -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:46 UTC │ 16 Oct 25 19:47 UTC │
	│ ssh     │ -p auto-078761 pgrep -a kubelet                                                                                                                                                                                                               │ auto-078761                  │ jenkins │ v1.37.0 │ 16 Oct 25 19:47 UTC │ 16 Oct 25 19:47 UTC │
	│ image   │ default-k8s-diff-port-850436 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:47 UTC │ 16 Oct 25 19:47 UTC │
	│ pause   │ -p default-k8s-diff-port-850436 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-850436 │ jenkins │ v1.37.0 │ 16 Oct 25 19:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 19:46:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 19:46:05.994198  500720 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:46:05.994721  500720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:46:05.994753  500720 out.go:374] Setting ErrFile to fd 2...
	I1016 19:46:05.994772  500720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:46:05.995061  500720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:46:05.995473  500720 out.go:368] Setting JSON to false
	I1016 19:46:05.996421  500720 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8895,"bootTime":1760635071,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:46:05.996513  500720 start.go:141] virtualization:  
	I1016 19:46:06.001381  500720 out.go:179] * [default-k8s-diff-port-850436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:46:06.004494  500720 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:46:06.004555  500720 notify.go:220] Checking for updates...
	I1016 19:46:06.012093  500720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:46:06.015246  500720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:06.018158  500720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:46:06.021112  500720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:46:06.024120  500720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:46:06.027585  500720 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:06.028147  500720 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:46:06.061048  500720 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:46:06.061187  500720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:46:06.170912  500720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:46:06.144094854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:46:06.171028  500720 docker.go:318] overlay module found
	I1016 19:46:06.174176  500720 out.go:179] * Using the docker driver based on existing profile
	I1016 19:46:06.177043  500720 start.go:305] selected driver: docker
	I1016 19:46:06.177062  500720 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:46:06.177190  500720 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:46:06.177936  500720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:46:06.263017  500720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-16 19:46:06.247993485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:46:06.263351  500720 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:46:06.263386  500720 cni.go:84] Creating CNI manager for ""
	I1016 19:46:06.263447  500720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:46:06.263489  500720 start.go:349] cluster config:
	{Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:46:06.266882  500720 out.go:179] * Starting "default-k8s-diff-port-850436" primary control-plane node in "default-k8s-diff-port-850436" cluster
	I1016 19:46:06.269884  500720 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 19:46:06.272835  500720 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 19:46:06.275705  500720 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:46:06.275777  500720 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 19:46:06.275788  500720 cache.go:58] Caching tarball of preloaded images
	I1016 19:46:06.275872  500720 preload.go:233] Found /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1016 19:46:06.275881  500720 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 19:46:06.275988  500720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json ...
	I1016 19:46:06.276202  500720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 19:46:06.297958  500720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 19:46:06.297978  500720 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 19:46:06.297998  500720 cache.go:232] Successfully downloaded all kic artifacts
	I1016 19:46:06.298021  500720 start.go:360] acquireMachinesLock for default-k8s-diff-port-850436: {Name:mk7e6cd57751a3c09c0a04e7fccd20808ff22477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 19:46:06.298073  500720 start.go:364] duration metric: took 35.816µs to acquireMachinesLock for "default-k8s-diff-port-850436"
	I1016 19:46:06.298092  500720 start.go:96] Skipping create...Using existing machine configuration
	I1016 19:46:06.298098  500720 fix.go:54] fixHost starting: 
	I1016 19:46:06.298356  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:06.330530  500720 fix.go:112] recreateIfNeeded on default-k8s-diff-port-850436: state=Stopped err=<nil>
	W1016 19:46:06.330556  500720 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 19:46:05.244987  498106 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 19:46:05.245321  498106 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 19:46:05.780075  498106 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 19:46:06.244756  498106 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 19:46:08.105390  498106 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 19:46:08.727733  498106 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 19:46:09.135343  498106 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 19:46:09.136231  498106 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 19:46:09.139022  498106 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 19:46:09.142842  498106 out.go:252]   - Booting up control plane ...
	I1016 19:46:09.142949  498106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 19:46:09.143030  498106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 19:46:09.143101  498106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 19:46:09.161878  498106 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 19:46:09.162127  498106 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 19:46:09.170657  498106 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 19:46:09.171160  498106 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 19:46:09.171231  498106 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 19:46:09.305076  498106 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 19:46:09.305266  498106 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 19:46:09.819168  498106 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 511.110613ms
	I1016 19:46:09.820266  498106 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 19:46:09.820998  498106 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1016 19:46:09.821335  498106 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 19:46:09.822192  498106 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
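	For context, the [kubelet-check] and [control-plane-check] lines above are simple poll-until-healthy loops against the endpoints shown. A minimal Go sketch of that pattern follows; the kubelet healthz URL is the one from the log, while the polling interval and per-request timeout are assumptions, not kubeadm's actual values.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the overall timeout elapses.
func waitHealthy(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The kubelet healthz endpoint from the log; the 4m budget matches the log's "up to 4m0s".
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}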
	I1016 19:46:06.333663  500720 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-850436" ...
	I1016 19:46:06.333757  500720 cli_runner.go:164] Run: docker start default-k8s-diff-port-850436
	I1016 19:46:06.646977  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:06.669604  500720 kic.go:430] container "default-k8s-diff-port-850436" state is running.
	I1016 19:46:06.670224  500720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:46:06.702833  500720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/config.json ...
	I1016 19:46:06.703064  500720 machine.go:93] provisionDockerMachine start ...
	I1016 19:46:06.703129  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:06.738325  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:06.738641  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:06.738659  500720 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 19:46:06.739830  500720 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1016 19:46:09.908959  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850436
	
	I1016 19:46:09.908996  500720 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-850436"
	I1016 19:46:09.909083  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:09.928439  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:09.928745  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:09.928764  500720 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850436 && echo "default-k8s-diff-port-850436" | sudo tee /etc/hostname
	I1016 19:46:10.091473  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850436
	
	I1016 19:46:10.091570  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:10.110901  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:10.111224  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:10.111247  500720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 19:46:10.271166  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
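	The shell snippet the provisioner just ran is an idempotent /etc/hosts edit: do nothing if the hostname is already mapped, otherwise rewrite the 127.0.1.1 alias line or append one. A minimal Go sketch of the same check, operating on an in-memory copy of the file (the hostname is the one from this run):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry returns hosts with a 127.0.1.1 mapping for name,
// leaving the content untouched if name already appears.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.Contains(l, name) { // already mapped, nothing to do
			return hosts
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") { // rewrite the existing alias line
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n" // no alias line yet, append one
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name"
	fmt.Println(ensureHostsEntry(before, "default-k8s-diff-port-850436"))
}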
	I1016 19:46:10.271220  500720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-288457/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-288457/.minikube}
	I1016 19:46:10.271243  500720 ubuntu.go:190] setting up certificates
	I1016 19:46:10.271252  500720 provision.go:84] configureAuth start
	I1016 19:46:10.271316  500720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:46:10.301446  500720 provision.go:143] copyHostCerts
	I1016 19:46:10.301504  500720 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem, removing ...
	I1016 19:46:10.301521  500720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem
	I1016 19:46:10.301574  500720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/ca.pem (1082 bytes)
	I1016 19:46:10.301657  500720 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem, removing ...
	I1016 19:46:10.301662  500720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem
	I1016 19:46:10.301685  500720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/cert.pem (1123 bytes)
	I1016 19:46:10.301738  500720 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem, removing ...
	I1016 19:46:10.301743  500720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem
	I1016 19:46:10.301765  500720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-288457/.minikube/key.pem (1679 bytes)
	I1016 19:46:10.301809  500720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850436 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-850436 localhost minikube]
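	The "generating server cert ... san=[...]" step above produces a certificate whose SANs cover every address a client might use to reach the machine. The sketch below shows how those SANs land in an x509 certificate using only the standard library; it self-signs for brevity, whereas minikube signs with its CA key, so treat it as an illustration rather than the actual provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-850436"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"default-k8s-diff-port-850436", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}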
	I1016 19:46:10.906745  500720 provision.go:177] copyRemoteCerts
	I1016 19:46:10.906817  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 19:46:10.906868  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:10.924623  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:11.035414  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 19:46:11.062524  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1016 19:46:11.092793  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1016 19:46:11.128753  500720 provision.go:87] duration metric: took 857.473855ms to configureAuth
	I1016 19:46:11.128781  500720 ubuntu.go:206] setting minikube options for container-runtime
	I1016 19:46:11.128996  500720 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:11.129114  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.157919  500720 main.go:141] libmachine: Using SSH client type: native
	I1016 19:46:11.158248  500720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1016 19:46:11.158271  500720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 19:46:11.578726  500720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 19:46:11.578805  500720 machine.go:96] duration metric: took 4.875718363s to provisionDockerMachine
	I1016 19:46:11.578831  500720 start.go:293] postStartSetup for "default-k8s-diff-port-850436" (driver="docker")
	I1016 19:46:11.578873  500720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 19:46:11.578971  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 19:46:11.579050  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.602079  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:11.731070  500720 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 19:46:11.734861  500720 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 19:46:11.734892  500720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 19:46:11.734904  500720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/addons for local assets ...
	I1016 19:46:11.734967  500720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-288457/.minikube/files for local assets ...
	I1016 19:46:11.735055  500720 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem -> 2903122.pem in /etc/ssl/certs
	I1016 19:46:11.735158  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 19:46:11.749745  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:46:11.779701  500720 start.go:296] duration metric: took 200.839355ms for postStartSetup
	I1016 19:46:11.779785  500720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:46:11.779849  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.806758  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:11.934167  500720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 19:46:11.943485  500720 fix.go:56] duration metric: took 5.645380335s for fixHost
	I1016 19:46:11.943510  500720 start.go:83] releasing machines lock for "default-k8s-diff-port-850436", held for 5.64542836s
	I1016 19:46:11.943592  500720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-850436
	I1016 19:46:11.977077  500720 ssh_runner.go:195] Run: cat /version.json
	I1016 19:46:11.977128  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:11.977401  500720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 19:46:11.977448  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:12.025394  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:12.027338  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:12.167135  500720 ssh_runner.go:195] Run: systemctl --version
	I1016 19:46:12.317939  500720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 19:46:12.403702  500720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 19:46:12.408788  500720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 19:46:12.408951  500720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 19:46:12.422752  500720 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 19:46:12.422826  500720 start.go:495] detecting cgroup driver to use...
	I1016 19:46:12.422921  500720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1016 19:46:12.423010  500720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 19:46:12.446091  500720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 19:46:12.467772  500720 docker.go:218] disabling cri-docker service (if available) ...
	I1016 19:46:12.467895  500720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 19:46:12.495198  500720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 19:46:12.523605  500720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 19:46:12.724657  500720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 19:46:12.938886  500720 docker.go:234] disabling docker service ...
	I1016 19:46:12.939036  500720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 19:46:12.955001  500720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 19:46:12.983308  500720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 19:46:13.194958  500720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 19:46:13.410359  500720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 19:46:13.435531  500720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 19:46:13.460543  500720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 19:46:13.460612  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.477764  500720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1016 19:46:13.477836  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.505844  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.531036  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.542709  500720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 19:46:13.562501  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.584765  500720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.600593  500720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 19:46:13.619721  500720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 19:46:13.640385  500720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 19:46:13.655556  500720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:13.870522  500720 ssh_runner.go:195] Run: sudo systemctl restart crio
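	The sed commands above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: the pause image and the cgroup manager. A small Go sketch of the same line-oriented rewrite, on an in-memory copy of the file (the starting values in the sample are assumptions; the replacement values mirror the log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}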
	I1016 19:46:14.065487  500720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 19:46:14.065650  500720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 19:46:14.077769  500720 start.go:563] Will wait 60s for crictl version
	I1016 19:46:14.077892  500720 ssh_runner.go:195] Run: which crictl
	I1016 19:46:14.085734  500720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 19:46:14.146611  500720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
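	The "Will wait 60s for socket path" step above is just repeated stat calls against the CRI-O socket until it appears or the budget runs out. A minimal Go sketch, with the socket path and 60s budget from the log and an assumed polling interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket returns nil once path exists, or an error after timeout.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}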
	I1016 19:46:14.146770  500720 ssh_runner.go:195] Run: crio --version
	I1016 19:46:14.191994  500720 ssh_runner.go:195] Run: crio --version
	I1016 19:46:14.254805  500720 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 19:46:14.257868  500720 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-850436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 19:46:14.285452  500720 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 19:46:14.289521  500720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:46:14.309558  500720 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 19:46:14.309666  500720 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 19:46:14.309717  500720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:46:14.363542  500720 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:46:14.363561  500720 crio.go:433] Images already preloaded, skipping extraction
	I1016 19:46:14.363616  500720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 19:46:14.422709  500720 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 19:46:14.422728  500720 cache_images.go:85] Images are preloaded, skipping loading
	I1016 19:46:14.422735  500720 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1016 19:46:14.422833  500720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-850436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 19:46:14.422906  500720 ssh_runner.go:195] Run: crio config
	I1016 19:46:14.574993  500720 cni.go:84] Creating CNI manager for ""
	I1016 19:46:14.575067  500720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:46:14.575103  500720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 19:46:14.575164  500720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850436 NodeName:default-k8s-diff-port-850436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 19:46:14.575353  500720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850436"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 19:46:14.575482  500720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 19:46:14.583397  500720 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 19:46:14.583516  500720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 19:46:14.599185  500720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1016 19:46:14.620673  500720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 19:46:14.639571  500720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
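	The kubeadm.yaml.new just written is the multi-document YAML stream dumped a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch of reading such a stream document by document; gopkg.in/yaml.v3 is an assumed dependency and the embedded two-document sample is abbreviated, not the full generated config.

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const sample = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(sample))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the multi-document stream
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc["kind"])
	}
}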
	I1016 19:46:14.659004  500720 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 19:46:14.663132  500720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 19:46:14.677721  500720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:14.896848  500720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:46:14.931614  500720 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436 for IP: 192.168.76.2
	I1016 19:46:14.931685  500720 certs.go:195] generating shared ca certs ...
	I1016 19:46:14.931716  500720 certs.go:227] acquiring lock for ca certs: {Name:mk62df25a6046aecef857f89f63b12be32b4fcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:14.931888  500720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key
	I1016 19:46:14.931963  500720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key
	I1016 19:46:14.931986  500720 certs.go:257] generating profile certs ...
	I1016 19:46:14.932135  500720 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.key
	I1016 19:46:14.932266  500720 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/apiserver.key.1d408be1
	I1016 19:46:14.932356  500720 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/proxy-client.key
	I1016 19:46:14.932516  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem (1338 bytes)
	W1016 19:46:14.932580  500720 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312_empty.pem, impossibly tiny 0 bytes
	I1016 19:46:14.932606  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca-key.pem (1675 bytes)
	I1016 19:46:14.932670  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/ca.pem (1082 bytes)
	I1016 19:46:14.932735  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/cert.pem (1123 bytes)
	I1016 19:46:14.932798  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/certs/key.pem (1679 bytes)
	I1016 19:46:14.932889  500720 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem (1708 bytes)
	I1016 19:46:14.933703  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 19:46:14.969004  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1016 19:46:15.008355  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 19:46:15.036614  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1016 19:46:15.075054  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 19:46:15.115131  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 19:46:15.163275  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 19:46:15.202312  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 19:46:15.253927  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/certs/290312.pem --> /usr/share/ca-certificates/290312.pem (1338 bytes)
	I1016 19:46:15.302353  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/ssl/certs/2903122.pem --> /usr/share/ca-certificates/2903122.pem (1708 bytes)
	I1016 19:46:15.365294  500720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 19:46:15.408332  500720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 19:46:15.431308  500720 ssh_runner.go:195] Run: openssl version
	I1016 19:46:15.446437  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 19:46:15.457806  500720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:46:15.461644  500720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:46:15.461767  500720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 19:46:15.540276  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 19:46:15.551771  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290312.pem && ln -fs /usr/share/ca-certificates/290312.pem /etc/ssl/certs/290312.pem"
	I1016 19:46:15.562888  500720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290312.pem
	I1016 19:46:15.567017  500720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 18:39 /usr/share/ca-certificates/290312.pem
	I1016 19:46:15.567143  500720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290312.pem
	I1016 19:46:15.609551  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290312.pem /etc/ssl/certs/51391683.0"
	I1016 19:46:15.623673  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903122.pem && ln -fs /usr/share/ca-certificates/2903122.pem /etc/ssl/certs/2903122.pem"
	I1016 19:46:15.637203  500720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903122.pem
	I1016 19:46:15.641464  500720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 18:39 /usr/share/ca-certificates/2903122.pem
	I1016 19:46:15.641581  500720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903122.pem
	I1016 19:46:15.706112  500720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2903122.pem /etc/ssl/certs/3ec20f2e.0"
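	The sequence above installs each CA into the node trust store: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, then symlink /etc/ssl/certs/<hash>.0 to it. A rough Go sketch of that last pair of steps, shelling out to the same "openssl x509 -hash -noout -in" invocation the log shows; the exact link target layout is simplified compared to what minikube does.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trust link:", link)
}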
	I1016 19:46:15.718341  500720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 19:46:15.722803  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 19:46:15.782311  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 19:46:15.877470  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 19:46:15.971529  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 19:46:16.151141  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 19:46:16.232560  500720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 19:46:16.354984  500720 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-850436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-850436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 19:46:16.355128  500720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 19:46:16.355221  500720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 19:46:16.489840  500720 cri.go:89] found id: "2921ea52af99aa969071fb411fb52ba0f384fcc606004df4ff328bb7b0e640a5"
	I1016 19:46:16.489912  500720 cri.go:89] found id: "f415a5edf62f2fed33a35088647cc0f9936a583cf2985d885edf35900733bab2"
	I1016 19:46:16.489948  500720 cri.go:89] found id: "7a3b24f9c4c6aafecdde8d6b650ec0da77e3d7b5505503d38459f34464dc2a07"
	I1016 19:46:16.489971  500720 cri.go:89] found id: "a3f7185e8b7d30b96feaff04a980ad8d52b0865f5c6a2ae6f3ecc05241267bce"
	I1016 19:46:16.489991  500720 cri.go:89] found id: ""
	I1016 19:46:16.490071  500720 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 19:46:16.522250  500720 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T19:46:16Z" level=error msg="open /run/runc: no such file or directory"
	I1016 19:46:16.522409  500720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 19:46:16.546271  500720 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 19:46:16.546344  500720 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 19:46:16.546442  500720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 19:46:16.570583  500720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 19:46:16.571084  500720 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-850436" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:16.571249  500720 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-288457/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-850436" cluster setting kubeconfig missing "default-k8s-diff-port-850436" context setting]
	I1016 19:46:16.571599  500720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:16.573363  500720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 19:46:16.594520  500720 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1016 19:46:16.594600  500720 kubeadm.go:601] duration metric: took 48.235824ms to restartPrimaryControlPlane
	I1016 19:46:16.594622  500720 kubeadm.go:402] duration metric: took 239.648351ms to StartCluster
	I1016 19:46:16.594668  500720 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:16.594763  500720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:16.595537  500720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:16.595802  500720 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:46:16.596174  500720 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:46:16.596252  500720 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850436"
	I1016 19:46:16.596265  500720 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-850436"
	W1016 19:46:16.596272  500720 addons.go:247] addon storage-provisioner should already be in state true
	I1016 19:46:16.596292  500720 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:46:16.596889  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.597300  500720 config.go:182] Loaded profile config "default-k8s-diff-port-850436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:16.597408  500720 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-850436"
	I1016 19:46:16.597437  500720 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-850436"
	W1016 19:46:16.597456  500720 addons.go:247] addon dashboard should already be in state true
	I1016 19:46:16.597504  500720 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:46:16.597976  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.598509  500720 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850436"
	I1016 19:46:16.598537  500720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850436"
	I1016 19:46:16.598817  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.602490  500720 out.go:179] * Verifying Kubernetes components...
	I1016 19:46:16.607439  500720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:16.658732  500720 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 19:46:16.661747  500720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:46:16.664110  500720 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-850436"
	W1016 19:46:16.664130  500720 addons.go:247] addon default-storageclass should already be in state true
	I1016 19:46:16.664158  500720 host.go:66] Checking if "default-k8s-diff-port-850436" exists ...
	I1016 19:46:16.664572  500720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-850436 --format={{.State.Status}}
	I1016 19:46:16.666323  500720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:16.666344  500720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:46:16.666399  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:16.666538  500720 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1016 19:46:16.190417  498106 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.368014024s
	I1016 19:46:18.842582  498106 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.018079962s
	I1016 19:46:20.324842  498106 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.503018435s
	I1016 19:46:20.346274  498106 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 19:46:20.368299  498106 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 19:46:20.387412  498106 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 19:46:20.387758  498106 kubeadm.go:318] [mark-control-plane] Marking the node auto-078761 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 19:46:20.409908  498106 kubeadm.go:318] [bootstrap-token] Using token: hj4xzy.uo6gwxqsrkjkbkd0
	I1016 19:46:16.669492  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 19:46:16.669516  500720 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 19:46:16.669588  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:16.712710  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:16.717268  500720 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:16.717288  500720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:46:16.717353  500720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-850436
	I1016 19:46:16.735462  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:16.752579  500720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/default-k8s-diff-port-850436/id_rsa Username:docker}
	I1016 19:46:17.104998  500720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:17.185925  500720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:46:17.231700  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 19:46:17.231726  500720 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 19:46:17.247361  500720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:17.346877  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 19:46:17.346898  500720 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 19:46:17.441025  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 19:46:17.441045  500720 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 19:46:17.691810  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 19:46:17.691829  500720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 19:46:17.774364  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 19:46:17.774386  500720 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 19:46:17.818174  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 19:46:17.818239  500720 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 19:46:17.871045  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 19:46:17.871109  500720 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 19:46:17.922580  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 19:46:17.922645  500720 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 19:46:17.972266  500720 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:46:17.972333  500720 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 19:46:18.009925  500720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 19:46:20.412777  498106 out.go:252]   - Configuring RBAC rules ...
	I1016 19:46:20.412946  498106 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 19:46:20.421775  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 19:46:20.441189  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 19:46:20.445608  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 19:46:20.452571  498106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 19:46:20.457100  498106 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 19:46:20.739479  498106 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 19:46:21.360262  498106 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 19:46:21.738981  498106 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 19:46:21.741920  498106 kubeadm.go:318] 
	I1016 19:46:21.741999  498106 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 19:46:21.742006  498106 kubeadm.go:318] 
	I1016 19:46:21.742085  498106 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 19:46:21.742090  498106 kubeadm.go:318] 
	I1016 19:46:21.742116  498106 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 19:46:21.742655  498106 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 19:46:21.742727  498106 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 19:46:21.742734  498106 kubeadm.go:318] 
	I1016 19:46:21.742791  498106 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 19:46:21.742795  498106 kubeadm.go:318] 
	I1016 19:46:21.742851  498106 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 19:46:21.742858  498106 kubeadm.go:318] 
	I1016 19:46:21.742912  498106 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 19:46:21.742990  498106 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 19:46:21.743061  498106 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 19:46:21.743070  498106 kubeadm.go:318] 
	I1016 19:46:21.743472  498106 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 19:46:21.743560  498106 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 19:46:21.743565  498106 kubeadm.go:318] 
	I1016 19:46:21.743914  498106 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token hj4xzy.uo6gwxqsrkjkbkd0 \
	I1016 19:46:21.744027  498106 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 \
	I1016 19:46:21.744256  498106 kubeadm.go:318] 	--control-plane 
	I1016 19:46:21.744267  498106 kubeadm.go:318] 
	I1016 19:46:21.744643  498106 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 19:46:21.744653  498106 kubeadm.go:318] 
	I1016 19:46:21.744957  498106 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token hj4xzy.uo6gwxqsrkjkbkd0 \
	I1016 19:46:21.745347  498106 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:85a5ed38e8d77f9704e1d1edb99d03efd3921606f6351dbe0cfb02fc48526847 
	I1016 19:46:21.750769  498106 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1016 19:46:21.751119  498106 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1016 19:46:21.751247  498106 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 19:46:21.751257  498106 cni.go:84] Creating CNI manager for ""
	I1016 19:46:21.751264  498106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 19:46:21.756796  498106 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 19:46:21.759672  498106 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 19:46:21.768346  498106 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 19:46:21.768365  498106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 19:46:21.802780  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 19:46:22.359535  498106 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 19:46:22.359670  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:22.359771  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-078761 minikube.k8s.io/updated_at=2025_10_16T19_46_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=auto-078761 minikube.k8s.io/primary=true
	I1016 19:46:22.655669  498106 ops.go:34] apiserver oom_adj: -16
	I1016 19:46:22.655781  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:23.156754  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:23.656321  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:24.155848  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:24.656257  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.190161  500720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.085126806s)
	I1016 19:46:26.190220  500720 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.004272204s)
	I1016 19:46:26.190249  500720 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850436" to be "Ready" ...
	I1016 19:46:26.190580  500720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.943152314s)
	I1016 19:46:26.190857  500720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.180808521s)
	I1016 19:46:26.194075  500720 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-850436 addons enable metrics-server
	
	I1016 19:46:26.236480  500720 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 19:46:25.156279  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:25.656679  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.156202  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.655935  498106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 19:46:26.763014  498106 kubeadm.go:1113] duration metric: took 4.403387394s to wait for elevateKubeSystemPrivileges
	I1016 19:46:26.763040  498106 kubeadm.go:402] duration metric: took 27.111078057s to StartCluster
	I1016 19:46:26.763057  498106 settings.go:142] acquiring lock: {Name:mkb4a7f9606cae3a865d252d55f83ffb633256f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:26.763120  498106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:46:26.764094  498106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-288457/kubeconfig: {Name:mk0de71d207db6907d91f8dfaf45a545b1e805db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 19:46:26.764320  498106 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 19:46:26.764411  498106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 19:46:26.764652  498106 config.go:182] Loaded profile config "auto-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:46:26.764700  498106 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 19:46:26.764766  498106 addons.go:69] Setting storage-provisioner=true in profile "auto-078761"
	I1016 19:46:26.764780  498106 addons.go:238] Setting addon storage-provisioner=true in "auto-078761"
	I1016 19:46:26.764816  498106 host.go:66] Checking if "auto-078761" exists ...
	I1016 19:46:26.765378  498106 addons.go:69] Setting default-storageclass=true in profile "auto-078761"
	I1016 19:46:26.765404  498106 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-078761"
	I1016 19:46:26.765655  498106 cli_runner.go:164] Run: docker container inspect auto-078761 --format={{.State.Status}}
	I1016 19:46:26.765933  498106 cli_runner.go:164] Run: docker container inspect auto-078761 --format={{.State.Status}}
	I1016 19:46:26.768082  498106 out.go:179] * Verifying Kubernetes components...
	I1016 19:46:26.771541  498106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 19:46:26.805507  498106 addons.go:238] Setting addon default-storageclass=true in "auto-078761"
	I1016 19:46:26.805553  498106 host.go:66] Checking if "auto-078761" exists ...
	I1016 19:46:26.805983  498106 cli_runner.go:164] Run: docker container inspect auto-078761 --format={{.State.Status}}
	I1016 19:46:26.823787  498106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 19:46:26.826864  498106 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:26.826888  498106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 19:46:26.826957  498106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078761
	I1016 19:46:26.846988  498106 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:26.847008  498106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 19:46:26.847072  498106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078761
	I1016 19:46:26.877374  498106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/auto-078761/id_rsa Username:docker}
	I1016 19:46:26.880903  498106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/auto-078761/id_rsa Username:docker}
	I1016 19:46:27.257117  498106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 19:46:27.299502  498106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 19:46:27.299695  498106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 19:46:27.339529  498106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 19:46:28.175598  498106 node_ready.go:35] waiting up to 15m0s for node "auto-078761" to be "Ready" ...
	I1016 19:46:28.175986  498106 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1016 19:46:28.229662  498106 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 19:46:28.232651  498106 addons.go:514] duration metric: took 1.467933946s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 19:46:28.679987  498106 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-078761" context rescaled to 1 replicas
	I1016 19:46:26.238820  500720 node_ready.go:49] node "default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:26.238889  500720 node_ready.go:38] duration metric: took 48.618223ms for node "default-k8s-diff-port-850436" to be "Ready" ...
	I1016 19:46:26.238919  500720 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:46:26.239012  500720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:46:26.241903  500720 addons.go:514] duration metric: took 9.645710356s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1016 19:46:26.255905  500720 api_server.go:72] duration metric: took 9.660040651s to wait for apiserver process to appear ...
	I1016 19:46:26.255980  500720 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:46:26.256012  500720 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1016 19:46:26.264842  500720 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1016 19:46:26.270159  500720 api_server.go:141] control plane version: v1.34.1
	I1016 19:46:26.270234  500720 api_server.go:131] duration metric: took 14.23417ms to wait for apiserver health ...
	I1016 19:46:26.270260  500720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:46:26.280292  500720 system_pods.go:59] 8 kube-system pods found
	I1016 19:46:26.280380  500720 system_pods.go:61] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:46:26.280405  500720 system_pods.go:61] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:46:26.280447  500720 system_pods.go:61] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:46:26.280476  500720 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:46:26.280500  500720 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:46:26.280522  500720 system_pods.go:61] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:46:26.280559  500720 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:46:26.280585  500720 system_pods.go:61] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Running
	I1016 19:46:26.280607  500720 system_pods.go:74] duration metric: took 10.32895ms to wait for pod list to return data ...
	I1016 19:46:26.280628  500720 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:46:26.283732  500720 default_sa.go:45] found service account: "default"
	I1016 19:46:26.283799  500720 default_sa.go:55] duration metric: took 3.149592ms for default service account to be created ...
	I1016 19:46:26.283822  500720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:46:26.287835  500720 system_pods.go:86] 8 kube-system pods found
	I1016 19:46:26.287920  500720 system_pods.go:89] "coredns-66bc5c9577-vnm65" [448486e9-ec0e-40c3-b106-5199d6090906] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:46:26.287951  500720 system_pods.go:89] "etcd-default-k8s-diff-port-850436" [239f4f2b-4e12-47a6-83bb-86b0144b67fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 19:46:26.287990  500720 system_pods.go:89] "kindnet-x85fg" [d4767810-daa5-4517-ba09-8bf6504516b2] Running
	I1016 19:46:26.288020  500720 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850436" [58577b33-3ea0-4618-b42e-afadd777a45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 19:46:26.288043  500720 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850436" [458d5d16-d6bc-4b97-94cc-0305f13a95a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 19:46:26.288065  500720 system_pods.go:89] "kube-proxy-2l5ck" [fb08d80e-eae2-4cfe-adec-7dff53b69338] Running
	I1016 19:46:26.288099  500720 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850436" [45fc8dad-2ab6-46df-b7f3-e4508cd3fc2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 19:46:26.288124  500720 system_pods.go:89] "storage-provisioner" [4d591848-c88d-48c6-9cb8-6c660c47d3c6] Running
	I1016 19:46:26.288147  500720 system_pods.go:126] duration metric: took 4.306127ms to wait for k8s-apps to be running ...
	I1016 19:46:26.288168  500720 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:46:26.288251  500720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:46:26.303571  500720 system_svc.go:56] duration metric: took 15.395308ms WaitForService to wait for kubelet
	I1016 19:46:26.303642  500720 kubeadm.go:586] duration metric: took 9.707780653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:46:26.303676  500720 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:46:26.307157  500720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:46:26.307231  500720 node_conditions.go:123] node cpu capacity is 2
	I1016 19:46:26.307257  500720 node_conditions.go:105] duration metric: took 3.559615ms to run NodePressure ...
	I1016 19:46:26.307281  500720 start.go:241] waiting for startup goroutines ...
	I1016 19:46:26.307314  500720 start.go:246] waiting for cluster config update ...
	I1016 19:46:26.307346  500720 start.go:255] writing updated cluster config ...
	I1016 19:46:26.307674  500720 ssh_runner.go:195] Run: rm -f paused
	I1016 19:46:26.312185  500720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:46:26.316080  500720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vnm65" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 19:46:28.385532  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:30.822875  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:30.179238  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:32.182406  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:34.678920  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:32.823659  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:35.322663  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:37.178575  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:39.180062  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:37.821467  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:40.322447  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:41.679475  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:43.679843  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:42.821425  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:44.822382  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:46.179357  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:48.179456  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:47.321562  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:49.322204  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:50.179832  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:52.179938  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:54.180046  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:51.322307  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:53.322368  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:55.821828  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	W1016 19:46:56.678439  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:59.178933  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:46:58.322333  500720 pod_ready.go:104] pod "coredns-66bc5c9577-vnm65" is not "Ready", error: <nil>
	I1016 19:46:59.322151  500720 pod_ready.go:94] pod "coredns-66bc5c9577-vnm65" is "Ready"
	I1016 19:46:59.322184  500720 pod_ready.go:86] duration metric: took 33.006040892s for pod "coredns-66bc5c9577-vnm65" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.325182  500720 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.330217  500720 pod_ready.go:94] pod "etcd-default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:59.330242  500720 pod_ready.go:86] duration metric: took 5.036196ms for pod "etcd-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.332793  500720 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.338352  500720 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:59.338381  500720 pod_ready.go:86] duration metric: took 5.56362ms for pod "kube-apiserver-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.340607  500720 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.521164  500720 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-850436" is "Ready"
	I1016 19:46:59.521189  500720 pod_ready.go:86] duration metric: took 180.55668ms for pod "kube-controller-manager-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:46:59.719984  500720 pod_ready.go:83] waiting for pod "kube-proxy-2l5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.122846  500720 pod_ready.go:94] pod "kube-proxy-2l5ck" is "Ready"
	I1016 19:47:00.122875  500720 pod_ready.go:86] duration metric: took 402.861351ms for pod "kube-proxy-2l5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.322437  500720 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.720211  500720 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-850436" is "Ready"
	I1016 19:47:00.720243  500720 pod_ready.go:86] duration metric: took 397.774834ms for pod "kube-scheduler-default-k8s-diff-port-850436" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:00.720255  500720 pod_ready.go:40] duration metric: took 34.407993349s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:47:00.780381  500720 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:47:00.784141  500720 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-850436" cluster and "default" namespace by default
	W1016 19:47:01.182047  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:47:03.678933  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:47:05.679047  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	W1016 19:47:08.179396  498106 node_ready.go:57] node "auto-078761" has "Ready":"False" status (will retry)
	I1016 19:47:08.679578  498106 node_ready.go:49] node "auto-078761" is "Ready"
	I1016 19:47:08.679612  498106 node_ready.go:38] duration metric: took 40.503940159s for node "auto-078761" to be "Ready" ...
	I1016 19:47:08.679626  498106 api_server.go:52] waiting for apiserver process to appear ...
	I1016 19:47:08.679686  498106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:47:08.691721  498106 api_server.go:72] duration metric: took 41.927364569s to wait for apiserver process to appear ...
	I1016 19:47:08.691757  498106 api_server.go:88] waiting for apiserver healthz status ...
	I1016 19:47:08.691778  498106 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1016 19:47:08.699986  498106 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1016 19:47:08.701020  498106 api_server.go:141] control plane version: v1.34.1
	I1016 19:47:08.701043  498106 api_server.go:131] duration metric: took 9.278816ms to wait for apiserver health ...
	I1016 19:47:08.701052  498106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 19:47:08.704165  498106 system_pods.go:59] 8 kube-system pods found
	I1016 19:47:08.704203  498106 system_pods.go:61] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:08.704210  498106 system_pods.go:61] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:08.704216  498106 system_pods.go:61] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:08.704221  498106 system_pods.go:61] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:08.704225  498106 system_pods.go:61] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:08.704241  498106 system_pods.go:61] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:08.704249  498106 system_pods.go:61] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:08.704255  498106 system_pods.go:61] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:08.704262  498106 system_pods.go:74] duration metric: took 3.203567ms to wait for pod list to return data ...
	I1016 19:47:08.704273  498106 default_sa.go:34] waiting for default service account to be created ...
	I1016 19:47:08.706790  498106 default_sa.go:45] found service account: "default"
	I1016 19:47:08.706814  498106 default_sa.go:55] duration metric: took 2.534668ms for default service account to be created ...
	I1016 19:47:08.706823  498106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 19:47:08.715224  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:08.715260  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:08.715267  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:08.715277  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:08.715282  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:08.715310  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:08.715324  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:08.715329  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:08.715335  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:08.715365  498106 retry.go:31] will retry after 278.562904ms: missing components: kube-dns
	I1016 19:47:09.001085  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:09.001122  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:09.001129  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:09.001180  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:09.001186  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:09.001196  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:09.001201  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:09.001211  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:09.001217  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:09.001242  498106 retry.go:31] will retry after 340.546737ms: missing components: kube-dns
	I1016 19:47:09.347024  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:09.347068  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:09.347077  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:09.347084  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:09.347094  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:09.347099  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:09.347103  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:09.347107  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:09.347124  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:09.347145  498106 retry.go:31] will retry after 339.55518ms: missing components: kube-dns
	I1016 19:47:09.690504  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:09.690542  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 19:47:09.690549  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:09.690555  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:09.690559  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:09.690564  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:09.690570  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:09.690576  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:09.690587  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 19:47:09.690603  498106 retry.go:31] will retry after 462.212589ms: missing components: kube-dns
	I1016 19:47:10.157567  498106 system_pods.go:86] 8 kube-system pods found
	I1016 19:47:10.157644  498106 system_pods.go:89] "coredns-66bc5c9577-46x84" [a046c5b5-2f1a-41a3-a08b-23ce5250dfe3] Running
	I1016 19:47:10.157660  498106 system_pods.go:89] "etcd-auto-078761" [eee7cc6b-5eca-4788-b212-06dcf44d0616] Running
	I1016 19:47:10.157665  498106 system_pods.go:89] "kindnet-2rx9m" [31294f09-d843-4736-a5a9-488fff4ebd9c] Running
	I1016 19:47:10.157669  498106 system_pods.go:89] "kube-apiserver-auto-078761" [6d8f96a0-9aa7-4228-9d93-1a965b823e49] Running
	I1016 19:47:10.157673  498106 system_pods.go:89] "kube-controller-manager-auto-078761" [59d840b0-351e-4291-b424-a73f03080ffd] Running
	I1016 19:47:10.157681  498106 system_pods.go:89] "kube-proxy-x4869" [a7c82db2-e6f9-46b6-bfc2-be2f6e45d7f4] Running
	I1016 19:47:10.157695  498106 system_pods.go:89] "kube-scheduler-auto-078761" [a42dae50-0a9e-488f-9c0c-6d0a85a6a855] Running
	I1016 19:47:10.157699  498106 system_pods.go:89] "storage-provisioner" [2e1d7a3c-fcf5-438a-ac73-359df1c527b8] Running
	I1016 19:47:10.157731  498106 system_pods.go:126] duration metric: took 1.450901957s to wait for k8s-apps to be running ...
	I1016 19:47:10.157751  498106 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 19:47:10.157852  498106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:47:10.173822  498106 system_svc.go:56] duration metric: took 16.055792ms WaitForService to wait for kubelet
	I1016 19:47:10.173852  498106 kubeadm.go:586] duration metric: took 43.409499747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 19:47:10.173870  498106 node_conditions.go:102] verifying NodePressure condition ...
	I1016 19:47:10.182529  498106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1016 19:47:10.182566  498106 node_conditions.go:123] node cpu capacity is 2
	I1016 19:47:10.182590  498106 node_conditions.go:105] duration metric: took 8.705082ms to run NodePressure ...
	I1016 19:47:10.182621  498106 start.go:241] waiting for startup goroutines ...
	I1016 19:47:10.182637  498106 start.go:246] waiting for cluster config update ...
	I1016 19:47:10.182648  498106 start.go:255] writing updated cluster config ...
	I1016 19:47:10.183033  498106 ssh_runner.go:195] Run: rm -f paused
	I1016 19:47:10.187466  498106 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:47:10.191451  498106 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-46x84" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.196484  498106 pod_ready.go:94] pod "coredns-66bc5c9577-46x84" is "Ready"
	I1016 19:47:10.196518  498106 pod_ready.go:86] duration metric: took 5.039224ms for pod "coredns-66bc5c9577-46x84" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.198965  498106 pod_ready.go:83] waiting for pod "etcd-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.203679  498106 pod_ready.go:94] pod "etcd-auto-078761" is "Ready"
	I1016 19:47:10.203706  498106 pod_ready.go:86] duration metric: took 4.715963ms for pod "etcd-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.206299  498106 pod_ready.go:83] waiting for pod "kube-apiserver-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.210823  498106 pod_ready.go:94] pod "kube-apiserver-auto-078761" is "Ready"
	I1016 19:47:10.210855  498106 pod_ready.go:86] duration metric: took 4.529154ms for pod "kube-apiserver-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.213471  498106 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.592153  498106 pod_ready.go:94] pod "kube-controller-manager-auto-078761" is "Ready"
	I1016 19:47:10.592177  498106 pod_ready.go:86] duration metric: took 378.682788ms for pod "kube-controller-manager-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:10.791174  498106 pod_ready.go:83] waiting for pod "kube-proxy-x4869" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.191802  498106 pod_ready.go:94] pod "kube-proxy-x4869" is "Ready"
	I1016 19:47:11.191830  498106 pod_ready.go:86] duration metric: took 400.626233ms for pod "kube-proxy-x4869" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.392233  498106 pod_ready.go:83] waiting for pod "kube-scheduler-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.791492  498106 pod_ready.go:94] pod "kube-scheduler-auto-078761" is "Ready"
	I1016 19:47:11.791522  498106 pod_ready.go:86] duration metric: took 399.258498ms for pod "kube-scheduler-auto-078761" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 19:47:11.791535  498106 pod_ready.go:40] duration metric: took 1.603988703s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 19:47:11.853304  498106 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1016 19:47:11.857701  498106 out.go:179] * Done! kubectl is now configured to use "auto-078761" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.37991573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f17da9c2-778b-497b-b7a1-c2f1b9c96f0c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.381253404Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2bf31473-64a4-452b-a448-9ed5d4b54083 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.382449988Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper" id=c1ef081c-c943-4365-84c6-289e06a08fbd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.382659517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.39236141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.392936219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.410531959Z" level=info msg="Created container b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper" id=c1ef081c-c943-4365-84c6-289e06a08fbd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.411199545Z" level=info msg="Starting container: b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b" id=de023940-21df-459b-9198-2f04421fbefb name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.41380436Z" level=info msg="Started container" PID=1655 containerID=b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper id=de023940-21df-459b-9198-2f04421fbefb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068
	Oct 16 19:47:04 default-k8s-diff-port-850436 conmon[1653]: conmon b93060d3e7af49e77f76 <ninfo>: container 1655 exited with status 1
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.615732487Z" level=info msg="Removing container: 41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9" id=48e07b46-8431-49af-919e-5940d49a1908 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.627980586Z" level=info msg="Error loading conmon cgroup of container 41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9: cgroup deleted" id=48e07b46-8431-49af-919e-5940d49a1908 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:47:04 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:04.631925569Z" level=info msg="Removed container 41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95/dashboard-metrics-scraper" id=48e07b46-8431-49af-919e-5940d49a1908 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.430129067Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.439745626Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.439909091Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.440031833Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.449845826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.449886778Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.449908522Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.454612899Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.454770447Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.454853665Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.458299762Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 19:47:05 default-k8s-diff-port-850436 crio[648]: time="2025-10-16T19:47:05.45833689Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	b93060d3e7af4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   2                   e0b8ecc71c6a4       dashboard-metrics-scraper-6ffb444bf9-kqn95             kubernetes-dashboard
	a3d43c810802a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   e3b00175d186c       storage-provisioner                                    kube-system
	6c01290858405       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   550f48919508f       kubernetes-dashboard-855c9754f9-ng9x9                  kubernetes-dashboard
	281e10eedb7a1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   5142691233b06       busybox                                                default
	b940324d46261       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   9a9f26f0c31d5       kube-proxy-2l5ck                                       kube-system
	8fbc3ea61b484       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   c365c4d49d6ff       kindnet-x85fg                                          kube-system
	eef613b6fb796       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   e3b00175d186c       storage-provisioner                                    kube-system
	188edef414d15       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   48b5c77716ea3       coredns-66bc5c9577-vnm65                               kube-system
	2921ea52af99a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b6db1d49fcf21       kube-apiserver-default-k8s-diff-port-850436            kube-system
	f415a5edf62f2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8a44e814225a5       kube-controller-manager-default-k8s-diff-port-850436   kube-system
	7a3b24f9c4c6a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   8fdca4c99c05c       kube-scheduler-default-k8s-diff-port-850436            kube-system
	a3f7185e8b7d3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a86b362171ea4       etcd-default-k8s-diff-port-850436                      kube-system
	
	
	==> coredns [188edef414d15f9fcd0a85fa49e7243fbf77dab45649e305a2e60a979dedd27f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60221 - 7660 "HINFO IN 1079641673264027012.4700828975717732101. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013758366s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-850436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-850436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=default-k8s-diff-port-850436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T19_44_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 19:44:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-850436
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 19:47:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 19:47:04 +0000   Thu, 16 Oct 2025 19:45:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-850436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                9d720de3-5d7a-422c-aff9-73121cba7d50
	  Boot ID:                    64b81be9-af43-418b-aa2a-604fcbda1cca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-vnm65                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-850436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-x85fg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-850436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-850436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-2l5ck                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-850436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kqn95              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ng9x9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-850436 event: Registered Node default-k8s-diff-port-850436 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-850436 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-850436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-850436 event: Registered Node default-k8s-diff-port-850436 in Controller
	
	
	==> dmesg <==
	[Oct16 19:23] overlayfs: idmapped layers are currently not supported
	[ +28.397927] overlayfs: idmapped layers are currently not supported
	[Oct16 19:24] overlayfs: idmapped layers are currently not supported
	[ +25.533019] overlayfs: idmapped layers are currently not supported
	[Oct16 19:26] overlayfs: idmapped layers are currently not supported
	[Oct16 19:27] overlayfs: idmapped layers are currently not supported
	[Oct16 19:29] overlayfs: idmapped layers are currently not supported
	[Oct16 19:31] overlayfs: idmapped layers are currently not supported
	[Oct16 19:32] overlayfs: idmapped layers are currently not supported
	[Oct16 19:34] overlayfs: idmapped layers are currently not supported
	[Oct16 19:36] overlayfs: idmapped layers are currently not supported
	[Oct16 19:37] overlayfs: idmapped layers are currently not supported
	[  +8.490329] overlayfs: idmapped layers are currently not supported
	[Oct16 19:38] overlayfs: idmapped layers are currently not supported
	[Oct16 19:39] overlayfs: idmapped layers are currently not supported
	[Oct16 19:40] overlayfs: idmapped layers are currently not supported
	[Oct16 19:41] overlayfs: idmapped layers are currently not supported
	[ +20.605853] overlayfs: idmapped layers are currently not supported
	[Oct16 19:43] overlayfs: idmapped layers are currently not supported
	[ +20.110477] overlayfs: idmapped layers are currently not supported
	[Oct16 19:44] overlayfs: idmapped layers are currently not supported
	[Oct16 19:45] overlayfs: idmapped layers are currently not supported
	[ +26.426905] overlayfs: idmapped layers are currently not supported
	[Oct16 19:46] overlayfs: idmapped layers are currently not supported
	[  +5.629854] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a3f7185e8b7d30b96feaff04a980ad8d52b0865f5c6a2ae6f3ecc05241267bce] <==
	{"level":"warn","ts":"2025-10-16T19:46:22.058284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.113781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.164806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.188797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.216172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.246634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.266471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.294033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.336886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.444271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.465002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.520679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.568130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.596561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.617544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.641632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.664795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.693041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.754376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.769820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.799545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.833679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.853514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.878524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T19:46:22.993092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:47:18 up  2:29,  0 user,  load average: 2.76, 3.41, 3.04
	Linux default-k8s-diff-port-850436 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8fbc3ea61b4840cffb138149604309a06a200993c1f68934c9f28f84215f43ca] <==
	I1016 19:46:25.264486       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 19:46:25.281696       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 19:46:25.281840       1 main.go:148] setting mtu 1500 for CNI 
	I1016 19:46:25.281852       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 19:46:25.281876       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T19:46:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 19:46:25.429169       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 19:46:25.429197       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 19:46:25.429205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 19:46:25.429830       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 19:46:55.429719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 19:46:55.429743       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 19:46:55.429799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 19:46:55.429839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1016 19:46:56.929280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 19:46:56.929315       1 metrics.go:72] Registering metrics
	I1016 19:46:56.929388       1 controller.go:711] "Syncing nftables rules"
	I1016 19:47:05.429463       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:47:05.429510       1 main.go:301] handling current node
	I1016 19:47:15.429505       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 19:47:15.429553       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2921ea52af99aa969071fb411fb52ba0f384fcc606004df4ff328bb7b0e640a5] <==
	I1016 19:46:24.100577       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 19:46:24.100611       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 19:46:24.101623       1 aggregator.go:171] initial CRD sync complete...
	I1016 19:46:24.101652       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 19:46:24.101660       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 19:46:24.101665       1 cache.go:39] Caches are synced for autoregister controller
	I1016 19:46:24.115842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 19:46:24.116081       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 19:46:24.116129       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 19:46:24.133377       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 19:46:24.155046       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 19:46:24.245597       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 19:46:24.249893       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 19:46:24.318774       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1016 19:46:24.341698       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 19:46:24.760818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 19:46:25.560843       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 19:46:25.715863       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 19:46:25.756760       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 19:46:25.774402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 19:46:25.884945       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.46.26"}
	I1016 19:46:25.910460       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.167.119"}
	I1016 19:46:28.736835       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 19:46:28.784325       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 19:46:28.886027       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f415a5edf62f2fed33a35088647cc0f9936a583cf2985d885edf35900733bab2] <==
	I1016 19:46:28.413332       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:46:28.427593       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 19:46:28.430719       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 19:46:28.430830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 19:46:28.430898       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 19:46:28.434435       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 19:46:28.434515       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 19:46:28.437220       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 19:46:28.439922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:46:28.442974       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 19:46:28.443029       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 19:46:28.448022       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 19:46:28.448131       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 19:46:28.448383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 19:46:28.454692       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 19:46:28.454779       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 19:46:28.454951       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 19:46:28.455029       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-850436"
	I1016 19:46:28.455070       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 19:46:28.457565       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 19:46:28.459144       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 19:46:28.474928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 19:46:28.478888       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 19:46:28.478914       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 19:46:28.478922       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [b940324d4626152d5c0c25dda09100a15cd59317900c9d608332a078d8a55714] <==
	I1016 19:46:25.537248       1 server_linux.go:53] "Using iptables proxy"
	I1016 19:46:25.654994       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 19:46:25.757275       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 19:46:25.758994       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 19:46:25.778614       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 19:46:26.089019       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 19:46:26.089157       1 server_linux.go:132] "Using iptables Proxier"
	I1016 19:46:26.093837       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 19:46:26.094228       1 server.go:527] "Version info" version="v1.34.1"
	I1016 19:46:26.094428       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:46:26.095712       1 config.go:200] "Starting service config controller"
	I1016 19:46:26.095768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 19:46:26.095815       1 config.go:106] "Starting endpoint slice config controller"
	I1016 19:46:26.095842       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 19:46:26.095881       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 19:46:26.095906       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 19:46:26.100014       1 config.go:309] "Starting node config controller"
	I1016 19:46:26.100110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 19:46:26.100143       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 19:46:26.199719       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 19:46:26.199909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 19:46:26.200198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a3b24f9c4c6aafecdde8d6b650ec0da77e3d7b5505503d38459f34464dc2a07] <==
	I1016 19:46:20.514708       1 serving.go:386] Generated self-signed cert in-memory
	W1016 19:46:24.102479       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1016 19:46:24.102570       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1016 19:46:24.102610       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1016 19:46:24.102640       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1016 19:46:24.213066       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 19:46:24.213095       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 19:46:24.225260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 19:46:24.250063       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 19:46:24.250590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:46:24.250604       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 19:46:24.350659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.015100     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7rp\" (UniqueName: \"kubernetes.io/projected/a086c25f-aa8c-4925-b778-32f4312b58da-kube-api-access-ss7rp\") pod \"kubernetes-dashboard-855c9754f9-ng9x9\" (UID: \"a086c25f-aa8c-4925-b778-32f4312b58da\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ng9x9"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.015339     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da221b3d-582e-4d7e-9190-ad4205d7a0e1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kqn95\" (UID: \"da221b3d-582e-4d7e-9190-ad4205d7a0e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.015476     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7bf\" (UniqueName: \"kubernetes.io/projected/da221b3d-582e-4d7e-9190-ad4205d7a0e1-kube-api-access-8q7bf\") pod \"dashboard-metrics-scraper-6ffb444bf9-kqn95\" (UID: \"da221b3d-582e-4d7e-9190-ad4205d7a0e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:29.146104     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: W1016 19:46:29.312458     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/crio-550f48919508f8071f3a41f22973f9e76b8bd6155ff072b93ae050dc7e9c7779 WatchSource:0}: Error finding container 550f48919508f8071f3a41f22973f9e76b8bd6155ff072b93ae050dc7e9c7779: Status 404 returned error can't find the container with id 550f48919508f8071f3a41f22973f9e76b8bd6155ff072b93ae050dc7e9c7779
	Oct 16 19:46:29 default-k8s-diff-port-850436 kubelet[776]: W1016 19:46:29.331832     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4aa7104008e97f7a1f7f5dac56eab810c25668ac010410a499ed189ae21dbb1a/crio-e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068 WatchSource:0}: Error finding container e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068: Status 404 returned error can't find the container with id e0b8ecc71c6a44573e6f52a473f250d83dfcc3bceb81c53a76142df88153c068
	Oct 16 19:46:39 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:39.543553     776 scope.go:117] "RemoveContainer" containerID="20c26f4be03399765ded8c75bbb6f522ed219d8c4f33be833024e86efa6d2bce"
	Oct 16 19:46:39 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:39.572125     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ng9x9" podStartSLOduration=6.264343898 podStartE2EDuration="11.572106083s" podCreationTimestamp="2025-10-16 19:46:28 +0000 UTC" firstStartedPulling="2025-10-16 19:46:29.320138941 +0000 UTC m=+14.402314060" lastFinishedPulling="2025-10-16 19:46:34.627901126 +0000 UTC m=+19.710076245" observedRunningTime="2025-10-16 19:46:35.548435858 +0000 UTC m=+20.630610985" watchObservedRunningTime="2025-10-16 19:46:39.572106083 +0000 UTC m=+24.654281202"
	Oct 16 19:46:40 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:40.548392     776 scope.go:117] "RemoveContainer" containerID="20c26f4be03399765ded8c75bbb6f522ed219d8c4f33be833024e86efa6d2bce"
	Oct 16 19:46:40 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:40.548742     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:46:40 default-k8s-diff-port-850436 kubelet[776]: E1016 19:46:40.548896     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:46:41 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:41.552439     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:46:41 default-k8s-diff-port-850436 kubelet[776]: E1016 19:46:41.552609     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:46:49 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:49.295055     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:46:49 default-k8s-diff-port-850436 kubelet[776]: E1016 19:46:49.295251     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:46:55 default-k8s-diff-port-850436 kubelet[776]: I1016 19:46:55.587875     776 scope.go:117] "RemoveContainer" containerID="eef613b6fb796e0cad4b501a6d0821685cd8a7c54283320e4f50f4d158511a2a"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:04.379410     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:04.613920     776 scope.go:117] "RemoveContainer" containerID="41c94ff30cc6a92048cafc2b15b9e7a44d4976b0fa3753e2677fc635b07a0be9"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:04.614153     776 scope.go:117] "RemoveContainer" containerID="b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b"
	Oct 16 19:47:04 default-k8s-diff-port-850436 kubelet[776]: E1016 19:47:04.614322     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:47:09 default-k8s-diff-port-850436 kubelet[776]: I1016 19:47:09.295239     776 scope.go:117] "RemoveContainer" containerID="b93060d3e7af49e77f76a0c238af703a0b5bd02650bbb1ff9d0a84489b5d595b"
	Oct 16 19:47:09 default-k8s-diff-port-850436 kubelet[776]: E1016 19:47:09.295908     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqn95_kubernetes-dashboard(da221b3d-582e-4d7e-9190-ad4205d7a0e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqn95" podUID="da221b3d-582e-4d7e-9190-ad4205d7a0e1"
	Oct 16 19:47:13 default-k8s-diff-port-850436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 19:47:13 default-k8s-diff-port-850436 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 19:47:13 default-k8s-diff-port-850436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6c012908584051b30602aa87822b512b418fdd18370e18b61ac73fdae4230834] <==
	2025/10/16 19:46:34 Using namespace: kubernetes-dashboard
	2025/10/16 19:46:34 Using in-cluster config to connect to apiserver
	2025/10/16 19:46:34 Using secret token for csrf signing
	2025/10/16 19:46:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 19:46:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 19:46:34 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 19:46:34 Generating JWE encryption key
	2025/10/16 19:46:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 19:46:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 19:46:35 Initializing JWE encryption key from synchronized object
	2025/10/16 19:46:35 Creating in-cluster Sidecar client
	2025/10/16 19:46:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:46:35 Serving insecurely on HTTP port: 9090
	2025/10/16 19:47:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 19:46:34 Starting overwatch
	
	
	==> storage-provisioner [a3d43c810802a012980bb607f3ee226f9b47a963fa02f3bd528833fe420201ba] <==
	I1016 19:46:55.649534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 19:46:55.662656       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 19:46:55.662725       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 19:46:55.664895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:46:59.120680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:03.382074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:06.979868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:10.033564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:13.056911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:13.063960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:47:13.064150       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 19:47:13.064771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"371d6569-f6ea-4eb0-a7cb-5543888dcf96", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-850436_14a473fb-f1e6-4fb0-a565-f2a20a7ccffa became leader
	I1016 19:47:13.064938       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850436_14a473fb-f1e6-4fb0-a565-f2a20a7ccffa!
	W1016 19:47:13.098771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:13.102626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 19:47:13.165303       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850436_14a473fb-f1e6-4fb0-a565-f2a20a7ccffa!
	W1016 19:47:15.105900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:15.123852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:17.127124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:17.134657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:19.138846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 19:47:19.144092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eef613b6fb796e0cad4b501a6d0821685cd8a7c54283320e4f50f4d158511a2a] <==
	I1016 19:46:25.053889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 19:46:55.059845       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436: exit status 2 (365.308239ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-850436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.12s)

                                                
                                    

Test pass (257/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 38.6
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 38.55
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 179.75
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.78
48 TestAddons/StoppedEnableDisable 12.46
49 TestCertOptions 40.45
50 TestCertExpiration 243.82
52 TestForceSystemdFlag 50.43
53 TestForceSystemdEnv 43.39
59 TestErrorSpam/setup 33.47
60 TestErrorSpam/start 0.81
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 6.65
63 TestErrorSpam/unpause 6.15
64 TestErrorSpam/stop 1.52
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 82.42
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.73
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.5
76 TestFunctional/serial/CacheCmd/cache/add_local 1.13
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
84 TestFunctional/serial/ExtraConfig 53.89
85 TestFunctional/serial/ComponentHealth 0.12
86 TestFunctional/serial/LogsCmd 1.49
87 TestFunctional/serial/LogsFileCmd 1.49
88 TestFunctional/serial/InvalidService 4.16
90 TestFunctional/parallel/ConfigCmd 0.46
91 TestFunctional/parallel/DashboardCmd 7.78
92 TestFunctional/parallel/DryRun 0.46
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.06
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 31.49
102 TestFunctional/parallel/SSHCmd 0.73
103 TestFunctional/parallel/CpCmd 2.59
105 TestFunctional/parallel/FileSync 0.36
106 TestFunctional/parallel/CertSync 2.19
110 TestFunctional/parallel/NodeLabels 0.13
112 TestFunctional/parallel/NonActiveRuntimeDisabled 1.13
114 TestFunctional/parallel/License 0.31
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.43
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
130 TestFunctional/parallel/MountCmd/any-port 6.38
131 TestFunctional/parallel/MountCmd/specific-port 1.93
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
133 TestFunctional/parallel/ServiceCmd/List 1.43
134 TestFunctional/parallel/ServiceCmd/JSONOutput 1.54
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.32
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.94
144 TestFunctional/parallel/ImageCommands/Setup 0.83
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 172.83
164 TestMultiControlPlane/serial/DeployApp 7.51
165 TestMultiControlPlane/serial/PingHostFromPods 1.55
166 TestMultiControlPlane/serial/AddWorkerNode 60.54
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
169 TestMultiControlPlane/serial/CopyFile 19.97
170 TestMultiControlPlane/serial/StopSecondaryNode 12.93
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
172 TestMultiControlPlane/serial/RestartSecondaryNode 30.15
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.46
177 TestMultiControlPlane/serial/StopCluster 24.17
178 TestMultiControlPlane/serial/RestartCluster 71.08
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
180 TestMultiControlPlane/serial/AddSecondaryNode 81.03
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
185 TestJSONOutput/start/Command 80.36
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 69.42
211 TestKicCustomNetwork/use_default_bridge_network 39.82
212 TestKicExistingNetwork 39.45
213 TestKicCustomSubnet 38.48
214 TestKicStaticIP 33.05
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 72.7
219 TestMountStart/serial/StartWithMountFirst 9.44
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.95
222 TestMountStart/serial/VerifyMountSecond 0.35
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.76
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 135.75
231 TestMultiNode/serial/DeployApp2Nodes 5.25
232 TestMultiNode/serial/PingHostFrom2Pods 0.99
233 TestMultiNode/serial/AddNode 58.93
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.53
237 TestMultiNode/serial/StopNode 2.44
238 TestMultiNode/serial/StartAfterStop 8.18
239 TestMultiNode/serial/RestartKeepsNodes 76.24
240 TestMultiNode/serial/DeleteNode 5.63
241 TestMultiNode/serial/StopMultiNode 24.39
242 TestMultiNode/serial/RestartMultiNode 52.05
243 TestMultiNode/serial/ValidateNameConflict 36.28
248 TestPreload 150.91
250 TestScheduledStopUnix 110.64
253 TestInsufficientStorage 14.14
254 TestRunningBinaryUpgrade 55.88
256 TestKubernetesUpgrade 343.43
257 TestMissingContainerUpgrade 120.07
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 46.79
261 TestNoKubernetes/serial/StartWithStopK8s 108.07
262 TestNoKubernetes/serial/Start 9.6
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
264 TestNoKubernetes/serial/ProfileList 1.34
265 TestNoKubernetes/serial/Stop 1.57
266 TestNoKubernetes/serial/StartNoArgs 8.2
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
268 TestStoppedBinaryUpgrade/Setup 2.81
269 TestStoppedBinaryUpgrade/Upgrade 55.33
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
279 TestPause/serial/Start 84.52
280 TestPause/serial/SecondStartNoReconfiguration 31.34
289 TestNetworkPlugins/group/false 5.1
294 TestStartStop/group/old-k8s-version/serial/FirstStart 63.89
295 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
297 TestStartStop/group/old-k8s-version/serial/Stop 12.01
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 50.59
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
305 TestStartStop/group/no-preload/serial/FirstStart 74.89
307 TestStartStop/group/embed-certs/serial/FirstStart 90.81
308 TestStartStop/group/no-preload/serial/DeployApp 9.34
310 TestStartStop/group/no-preload/serial/Stop 12.09
311 TestStartStop/group/embed-certs/serial/DeployApp 10.36
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
313 TestStartStop/group/no-preload/serial/SecondStart 51.57
315 TestStartStop/group/embed-certs/serial/Stop 12.32
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
317 TestStartStop/group/embed-certs/serial/SecondStart 51.04
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.26
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
329 TestStartStop/group/newest-cni/serial/FirstStart 41.53
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.35
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
334 TestStartStop/group/newest-cni/serial/SecondStart 15.28
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.46
340 TestNetworkPlugins/group/auto/Start 87.11
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.07
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.23
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
347 TestNetworkPlugins/group/auto/KubeletFlags 0.33
348 TestNetworkPlugins/group/auto/NetCatPod 11.42
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
351 TestNetworkPlugins/group/kindnet/Start 89.07
352 TestNetworkPlugins/group/auto/DNS 0.22
353 TestNetworkPlugins/group/auto/Localhost 0.23
354 TestNetworkPlugins/group/auto/HairPin 0.17
355 TestNetworkPlugins/group/calico/Start 68.09
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
360 TestNetworkPlugins/group/calico/KubeletFlags 0.32
361 TestNetworkPlugins/group/calico/NetCatPod 10.28
362 TestNetworkPlugins/group/kindnet/DNS 0.19
363 TestNetworkPlugins/group/kindnet/Localhost 0.14
364 TestNetworkPlugins/group/kindnet/HairPin 0.18
365 TestNetworkPlugins/group/calico/DNS 0.17
366 TestNetworkPlugins/group/calico/Localhost 0.14
367 TestNetworkPlugins/group/calico/HairPin 0.14
368 TestNetworkPlugins/group/custom-flannel/Start 70.01
369 TestNetworkPlugins/group/enable-default-cni/Start 84.12
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
372 TestNetworkPlugins/group/custom-flannel/DNS 0.18
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.45
377 TestNetworkPlugins/group/flannel/Start 54.58
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
381 TestNetworkPlugins/group/bridge/Start 77.5
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
384 TestNetworkPlugins/group/flannel/NetCatPod 11.32
385 TestNetworkPlugins/group/flannel/DNS 0.16
386 TestNetworkPlugins/group/flannel/Localhost 0.13
387 TestNetworkPlugins/group/flannel/HairPin 0.13
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
389 TestNetworkPlugins/group/bridge/NetCatPod 10.27
390 TestNetworkPlugins/group/bridge/DNS 0.14
391 TestNetworkPlugins/group/bridge/Localhost 0.12
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (38.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-933367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-933367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.602925433s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (38.60s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1016 18:31:26.824539  290312 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1016 18:31:26.824620  290312 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-933367
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-933367: exit status 85 (82.29162ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-933367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-933367 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:30:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:30:48.270171  290318 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:48.270280  290318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:48.270292  290318 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:48.270298  290318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:48.270565  290318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	W1016 18:30:48.270701  290318 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21738-288457/.minikube/config/config.json: open /home/jenkins/minikube-integration/21738-288457/.minikube/config/config.json: no such file or directory
	I1016 18:30:48.271085  290318 out.go:368] Setting JSON to true
	I1016 18:30:48.271902  290318 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4378,"bootTime":1760635071,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:30:48.271979  290318 start.go:141] virtualization:  
	I1016 18:30:48.276315  290318 out.go:99] [download-only-933367] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1016 18:30:48.276508  290318 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball: no such file or directory
	I1016 18:30:48.276627  290318 notify.go:220] Checking for updates...
	I1016 18:30:48.279809  290318 out.go:171] MINIKUBE_LOCATION=21738
	I1016 18:30:48.282874  290318 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:30:48.285866  290318 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:30:48.288903  290318 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:30:48.291885  290318 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1016 18:30:48.297668  290318 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1016 18:30:48.297978  290318 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:30:48.331115  290318 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:30:48.331248  290318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:48.388280  290318 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-16 18:30:48.37896187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:30:48.388388  290318 docker.go:318] overlay module found
	I1016 18:30:48.391390  290318 out.go:99] Using the docker driver based on user configuration
	I1016 18:30:48.391434  290318 start.go:305] selected driver: docker
	I1016 18:30:48.391441  290318 start.go:925] validating driver "docker" against <nil>
	I1016 18:30:48.391557  290318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:48.448904  290318 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-16 18:30:48.43951735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:30:48.449059  290318 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:30:48.449405  290318 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1016 18:30:48.449560  290318 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1016 18:30:48.452686  290318 out.go:171] Using Docker driver with root privileges
	I1016 18:30:48.455598  290318 cni.go:84] Creating CNI manager for ""
	I1016 18:30:48.455675  290318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:48.455689  290318 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:30:48.455771  290318 start.go:349] cluster config:
	{Name:download-only-933367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-933367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:48.458759  290318 out.go:99] Starting "download-only-933367" primary control-plane node in "download-only-933367" cluster
	I1016 18:30:48.458781  290318 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:30:48.461630  290318 out.go:99] Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:30:48.461663  290318 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 18:30:48.461814  290318 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:30:48.477883  290318 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1016 18:30:48.478095  290318 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1016 18:30:48.478193  290318 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1016 18:30:48.534270  290318 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1016 18:30:48.534294  290318 cache.go:58] Caching tarball of preloaded images
	I1016 18:30:48.534452  290318 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 18:30:48.537791  290318 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1016 18:30:48.537821  290318 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1016 18:30:48.620775  290318 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1016 18:30:48.620950  290318 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1016 18:30:54.082524  290318 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	
	
	* The control-plane node download-only-933367 host does not exist
	  To start a cluster, run: "minikube start -p download-only-933367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
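For reference, the log above records both the preload tarball's cache path and the md5 checksum returned by the GCS API, so the cached file can be re-verified by hand. A minimal sketch, reusing the path and checksum exactly as they appear in this run:

	md5sum /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	# expected digest, per the GCS API response logged above: e092595ade89dbfc477bd4cd6b9c633b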

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-933367
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (38.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-932213 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-932213 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.549616732s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (38.55s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1016 18:32:05.822871  290312 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1016 18:32:05.822911  290312 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-932213
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-932213: exit status 85 (65.85337ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-933367 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-933367 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ delete  │ -p download-only-933367                                                                                                                                                   │ download-only-933367 │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ start   │ -o=json --download-only -p download-only-932213 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-932213 │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:31:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:31:27.318837  290521 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:31:27.319029  290521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:27.319069  290521 out.go:374] Setting ErrFile to fd 2...
	I1016 18:31:27.319082  290521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:27.319403  290521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:31:27.319876  290521 out.go:368] Setting JSON to true
	I1016 18:31:27.320769  290521 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4417,"bootTime":1760635071,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:31:27.320843  290521 start.go:141] virtualization:  
	I1016 18:31:27.324173  290521 out.go:99] [download-only-932213] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:31:27.324370  290521 notify.go:220] Checking for updates...
	I1016 18:31:27.327407  290521 out.go:171] MINIKUBE_LOCATION=21738
	I1016 18:31:27.330294  290521 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:31:27.333267  290521 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:31:27.336230  290521 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:31:27.339157  290521 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1016 18:31:27.344934  290521 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1016 18:31:27.345267  290521 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:31:27.371809  290521 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:31:27.371930  290521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:31:27.428732  290521 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-16 18:31:27.419350712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:31:27.428848  290521 docker.go:318] overlay module found
	I1016 18:31:27.431857  290521 out.go:99] Using the docker driver based on user configuration
	I1016 18:31:27.431908  290521 start.go:305] selected driver: docker
	I1016 18:31:27.431926  290521 start.go:925] validating driver "docker" against <nil>
	I1016 18:31:27.432043  290521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:31:27.486046  290521 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-16 18:31:27.476464745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:31:27.486204  290521 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:31:27.486497  290521 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1016 18:31:27.486689  290521 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1016 18:31:27.489865  290521 out.go:171] Using Docker driver with root privileges
	I1016 18:31:27.492728  290521 cni.go:84] Creating CNI manager for ""
	I1016 18:31:27.492809  290521 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:31:27.492822  290521 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:31:27.492898  290521 start.go:349] cluster config:
	{Name:download-only-932213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-932213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:31:27.495758  290521 out.go:99] Starting "download-only-932213" primary control-plane node in "download-only-932213" cluster
	I1016 18:31:27.495791  290521 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:31:27.498555  290521 out.go:99] Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:31:27.498599  290521 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:31:27.498646  290521 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:31:27.514145  290521 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1016 18:31:27.514291  290521 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1016 18:31:27.514310  290521 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1016 18:31:27.514315  290521 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1016 18:31:27.514323  290521 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1016 18:31:27.554834  290521 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1016 18:31:27.554860  290521 cache.go:58] Caching tarball of preloaded images
	I1016 18:31:27.555031  290521 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:31:27.558215  290521 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1016 18:31:27.558251  290521 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1016 18:31:27.657810  290521 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1016 18:31:27.657887  290521 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21738-288457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-932213 host does not exist
	  To start a cluster, run: "minikube start -p download-only-932213"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-932213
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
I1016 18:32:06.905558  290312 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-086561 --alsologtostderr --binary-mirror http://127.0.0.1:41065 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-086561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-086561
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-303264
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-303264: exit status 85 (68.501549ms)

                                                
                                                
-- stdout --
	* Profile "addons-303264" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-303264"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-303264
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-303264: exit status 85 (66.979452ms)

                                                
                                                
-- stdout --
	* Profile "addons-303264" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-303264"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (179.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-303264 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-303264 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m59.747888242s)
--- PASS: TestAddons/Setup (179.75s)
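As a quick cross-check of which of the requested addons actually came up, the same profile can be queried with the binary under test; a minimal sketch, not part of the test itself:

	out/minikube-linux-arm64 -p addons-303264 addons list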

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-303264 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-303264 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.78s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-303264 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-303264 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2399eb6b-0b70-4a46-acca-4929071138df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2399eb6b-0b70-4a46-acca-4929071138df] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003672543s
addons_test.go:694: (dbg) Run:  kubectl --context addons-303264 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-303264 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-303264 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-303264 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.78s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-303264
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-303264: (12.153561236s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-303264
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-303264
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-303264
--- PASS: TestAddons/StoppedEnableDisable (12.46s)

                                                
                                    
TestCertOptions (40.45s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-853056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-853056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.556943153s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-853056 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-853056 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-853056 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-853056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-853056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-853056: (2.127744098s)
--- PASS: TestCertOptions (40.45s)

                                                
                                    
TestCertExpiration (243.82s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-828182 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1016 19:37:24.359015  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-828182 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.750523439s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-828182 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.930560465s)
helpers_test.go:175: Cleaning up "cert-expiration-828182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-828182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-828182: (3.132373576s)
--- PASS: TestCertExpiration (243.82s)

                                                
                                    
TestForceSystemdFlag (50.43s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-766055 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-766055 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.243498261s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-766055 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-766055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-766055
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-766055: (2.725033599s)
--- PASS: TestForceSystemdFlag (50.43s)
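The ssh step above dumps /etc/crio/crio.conf.d/02-crio.conf to confirm that --force-systemd took effect. A minimal sketch of the same check narrowed to the relevant key; the expected `cgroup_manager = "systemd"` value is an assumption about the generated CRI-O config, not something shown in this log, and the profile is deleted at the end of the test, so this only works against a live run:

	out/minikube-linux-arm64 -p force-systemd-flag-766055 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected (assumption): cgroup_manager = "systemd"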

                                                
                                    
TestForceSystemdEnv (43.39s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-871877 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-871877 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.469454795s)
helpers_test.go:175: Cleaning up "force-systemd-env-871877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-871877
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-871877: (2.922249319s)
--- PASS: TestForceSystemdEnv (43.39s)

                                                
                                    
TestErrorSpam/setup (33.47s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-117002 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-117002 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-117002 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-117002 --driver=docker  --container-runtime=crio: (33.472910051s)
--- PASS: TestErrorSpam/setup (33.47s)

                                                
                                    
TestErrorSpam/start (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

                                                
                                    
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (6.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause: exit status 80 (2.071157232s)

                                                
                                                
-- stdout --
	* Pausing node nospam-117002 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:39:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause: exit status 80 (2.23473664s)

                                                
                                                
-- stdout --
	* Pausing node nospam-117002 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:39:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause: exit status 80 (2.341755192s)

                                                
                                                
-- stdout --
	* Pausing node nospam-117002 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:39:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.65s)
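Each pause attempt above exits 80 because `sudo runc list -f json` inside the node fails with "open /run/runc: no such file or directory". A minimal diagnostic sketch, assuming the nospam-117002 node container is still running under the docker driver:

	# does runc's state directory exist at all inside the node?
	docker exec nospam-117002 ls -ld /run/runc
	# re-run the exact command minikube uses to list running containers
	docker exec nospam-117002 sudo runc list -f json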

                                                
                                    
TestErrorSpam/unpause (6.15s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause: exit status 80 (2.190325807s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-117002 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:39:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause: exit status 80 (2.261490345s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-117002 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:39:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause: exit status 80 (1.702052077s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-117002 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:39:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.15s)

                                                
                                    
x
+
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 stop: (1.317900713s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-117002 --log_dir /tmp/nospam-117002 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21738-288457/.minikube/files/etc/test/nested/copy/290312/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (82.42s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-703623 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1016 18:40:08.368084  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:08.374601  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:08.386051  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:08.407426  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:08.448884  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:08.530296  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:08.691770  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:09.013463  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:09.655366  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:10.937154  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:13.499849  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:18.621940  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:40:28.863510  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-703623 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.416891843s)
--- PASS: TestFunctional/serial/StartWithProxy (82.42s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.73s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1016 18:40:46.139571  290312 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-703623 --alsologtostderr -v=8
E1016 18:40:49.345002  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-703623 --alsologtostderr -v=8: (27.724260139s)
functional_test.go:678: soft start took 27.728121254s for "functional-703623" cluster.
I1016 18:41:13.864154  290312 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.73s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-703623 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 cache add registry.k8s.io/pause:3.1: (1.202983094s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 cache add registry.k8s.io/pause:3.3: (1.176286532s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 cache add registry.k8s.io/pause:latest: (1.117065891s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.50s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-703623 /tmp/TestFunctionalserialCacheCmdcacheadd_local3844312342/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cache add minikube-local-cache-test:functional-703623
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cache delete minikube-local-cache-test:functional-703623
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-703623
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)
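The add_local flow above builds a throwaway image on the host, pushes it into minikube's cache, and then removes both copies. A sketch of the same sequence follows; the "./build-context" directory name is a placeholder, since the test builds from a generated temp directory.

    package main

    import (
        "log"
        "os/exec"
    )

    // runCmd runs a command, streams its output, and aborts on failure.
    func runCmd(name string, args ...string) {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
        if err := cmd.Run(); err != nil {
            log.Fatalf("%s %v: %v", name, args, err)
        }
    }

    func main() {
        img := "minikube-local-cache-test:functional-703623"
        // Build a throwaway local image ("./build-context" is a placeholder
        // for the temp build directory the test uses).
        runCmd("docker", "build", "-t", img, "./build-context")
        // Push it into minikube's cache, then clean up both the cache entry
        // and the local Docker image, mirroring the add_local steps above.
        runCmd("out/minikube-linux-arm64", "-p", "functional-703623", "cache", "add", img)
        runCmd("out/minikube-linux-arm64", "-p", "functional-703623", "cache", "delete", img)
        runCmd("docker", "rmi", img)
    }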

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.532945ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
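cache_reload demonstrates that a cached image can be restored after it has been deleted inside the node: `crictl rmi` removes it, `crictl inspecti` then fails, and `minikube cache reload` puts it back. A sketch of that sequence, assuming a plain zero/non-zero exit check is enough to tell the two states apart:

    package main

    import (
        "log"
        "os/exec"
    )

    // run invokes the minikube binary from this report against the
    // functional-703623 profile; a nil error means exit status 0.
    func run(args ...string) error {
        all := append([]string{"-p", "functional-703623"}, args...)
        cmd := exec.Command("out/minikube-linux-arm64", all...)
        cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
        return cmd.Run()
    }

    func main() {
        img := "registry.k8s.io/pause:latest"
        // Remove the image inside the node, as the test does.
        _ = run("ssh", "sudo", "crictl", "rmi", img)
        // inspecti is now expected to fail (exit status 1 in the log above).
        if err := run("ssh", "sudo", "crictl", "inspecti", img); err == nil {
            log.Fatal("expected inspecti to fail after rmi")
        }
        // cache reload pushes the cached image back into the node ...
        if err := run("cache", "reload"); err != nil {
            log.Fatal(err)
        }
        // ... after which inspecti succeeds again.
        if err := run("ssh", "sudo", "crictl", "inspecti", img); err != nil {
            log.Fatal(err)
        }
    }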

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 kubectl -- --context functional-703623 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-703623 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (53.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-703623 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1016 18:41:30.306473  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-703623 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.885943179s)
functional_test.go:776: restart took 53.886046442s for "functional-703623" cluster.
I1016 18:42:15.154974  290312 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (53.89s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-703623 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)
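ComponentHealth lists the control-plane pods with the label selector shown above and checks that each is Running and Ready. A reduced sketch of the same query follows; the jsonpath template is my own shorthand (an assumption), whereas the test decodes the full JSON and also inspects each pod's Ready condition.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same selector and namespace as the ComponentHealth test,
        // reduced to "name phase" per control-plane pod.
        cmd := exec.Command("kubectl", "--context", "functional-703623",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system",
            "-o", `jsonpath={range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}`)
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("kubectl failed:", err)
        }
        fmt.Printf("%s", out)
    }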

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 logs: (1.485745757s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 logs --file /tmp/TestFunctionalserialLogsFileCmd464785871/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 logs --file /tmp/TestFunctionalserialLogsFileCmd464785871/001/logs.txt: (1.491736601s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.16s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-703623 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-703623
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-703623: exit status 115 (380.85311ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32057 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-703623 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
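InvalidService applies a Service whose pods never start and checks that `minikube service` refuses it with SVC_UNREACHABLE instead of handing out a dead URL. A sketch of inspecting that exit code (115 in the run above), assuming the invalid Service has already been applied as in the test:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Ask minikube for the URL of a Service with no running pods.
        cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc",
            "-p", "functional-703623")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // The log shows exit status 115 together with
            // "X Exiting due to SVC_UNREACHABLE: service not available".
            fmt.Println("service command refused, exit code:", exitErr.ExitCode())
        }
    }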

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 config get cpus: exit status 14 (83.365318ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 config get cpus: exit status 14 (77.643028ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
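ConfigCmd shows that `config get` on an unset key exits with status 14 and an error on stderr, while a set key prints its value. The sketch below wraps this in a lookup helper; treating exit code 14 as "key not set" is an assumption drawn from the runs above.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // getConfig reads a minikube config key for the functional-703623 profile.
    // Exit status 14 (seen in the log above) is mapped to "key not set".
    func getConfig(key string) (string, bool, error) {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-703623",
            "config", "get", key)
        out, err := cmd.Output()
        if err != nil {
            var exitErr *exec.ExitError
            if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
                return "", false, nil
            }
            return "", false, err
        }
        return strings.TrimSpace(string(out)), true, nil
    }

    func main() {
        if v, ok, err := getConfig("cpus"); err != nil {
            fmt.Println("error:", err)
        } else if !ok {
            fmt.Println("cpus is not set")
        } else {
            fmt.Println("cpus =", v)
        }
    }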

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (7.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-703623 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-703623 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 316478: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.78s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-703623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-703623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (206.742293ms)

                                                
                                                
-- stdout --
	* [functional-703623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:52:51.440533  316231 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:52:51.440697  316231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:52:51.440708  316231 out.go:374] Setting ErrFile to fd 2...
	I1016 18:52:51.440713  316231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:52:51.440986  316231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:52:51.441434  316231 out.go:368] Setting JSON to false
	I1016 18:52:51.442355  316231 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5701,"bootTime":1760635071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:52:51.442436  316231 start.go:141] virtualization:  
	I1016 18:52:51.445730  316231 out.go:179] * [functional-703623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 18:52:51.449402  316231 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:52:51.449590  316231 notify.go:220] Checking for updates...
	I1016 18:52:51.455826  316231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:52:51.458640  316231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:52:51.461353  316231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:52:51.464939  316231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:52:51.467967  316231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:52:51.471382  316231 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:52:51.471966  316231 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:52:51.510550  316231 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:52:51.510701  316231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:52:51.572688  316231 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 18:52:51.562683849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:52:51.572795  316231 docker.go:318] overlay module found
	I1016 18:52:51.575974  316231 out.go:179] * Using the docker driver based on existing profile
	I1016 18:52:51.578774  316231 start.go:305] selected driver: docker
	I1016 18:52:51.578795  316231 start.go:925] validating driver "docker" against &{Name:functional-703623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-703623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:52:51.578907  316231 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:52:51.582374  316231 out.go:203] 
	W1016 18:52:51.585299  316231 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1016 18:52:51.588222  316231 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-703623 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
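DryRun validates a proposed start configuration against the existing profile without creating anything; an undersized --memory request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23. A sketch of using the same dry-run as a pre-flight check:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Validate the configuration only; 250MB is below minikube's usable
        // minimum, so this exits 23 in the run above.
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-703623",
            "--dry-run", "--memory", "250MB",
            "--driver=docker", "--container-runtime=crio")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Println("dry-run rejected the config, exit code:", exitErr.ExitCode())
        }
    }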

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-703623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-703623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.501955ms)

                                                
                                                
-- stdout --
	* [functional-703623] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:52:51.227340  316184 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:52:51.227470  316184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:52:51.227478  316184 out.go:374] Setting ErrFile to fd 2...
	I1016 18:52:51.227483  316184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:52:51.229151  316184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:52:51.229666  316184 out.go:368] Setting JSON to false
	I1016 18:52:51.230630  316184 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5701,"bootTime":1760635071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 18:52:51.230727  316184 start.go:141] virtualization:  
	I1016 18:52:51.235945  316184 out.go:179] * [functional-703623] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1016 18:52:51.238993  316184 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:52:51.239051  316184 notify.go:220] Checking for updates...
	I1016 18:52:51.242392  316184 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:52:51.245419  316184 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 18:52:51.248430  316184 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 18:52:51.251677  316184 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 18:52:51.254717  316184 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:52:51.258166  316184 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:52:51.258715  316184 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:52:51.295291  316184 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 18:52:51.295424  316184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:52:51.365275  316184 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 18:52:51.35499794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:52:51.365397  316184 docker.go:318] overlay module found
	I1016 18:52:51.368644  316184 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1016 18:52:51.371506  316184 start.go:305] selected driver: docker
	I1016 18:52:51.371535  316184 start.go:925] validating driver "docker" against &{Name:functional-703623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-703623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:52:51.371643  316184 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:52:51.375382  316184 out.go:203] 
	W1016 18:52:51.378275  316184 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1016 18:52:51.381172  316184 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
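The French output above is the same dry-run failure rendered in another locale; minikube presumably picks the language from the process's locale environment, so forcing it on a child process should reproduce the translated message. Which variables are actually consulted is an assumption in this sketch.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Force a French locale on the child process (assumed mechanism); the
        // translated RSRC_INSUFFICIENT_REQ_MEMORY message above should come back
        // instead of the English one from the DryRun section.
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-703623",
            "--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
        out, _ := cmd.CombinedOutput()
        fmt.Printf("%s", out)
    }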

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (31.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d85fd27e-6100-424d-b766-222e782c8a55] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005721302s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-703623 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-703623 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-703623 get pvc myclaim -o=json
I1016 18:42:30.442577  290312 retry.go:31] will retry after 1.968060042s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:acfb8a1c-4c59-46bd-be7c-05b4d60bf1ef ResourceVersion:729 Generation:0 CreationTimestamp:2025-10-16 18:42:30 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x4001599100 VolumeMode:0x4001599110 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-703623 get pvc myclaim -o=json
I1016 18:42:32.490699  290312 retry.go:31] will retry after 4.482346455s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:acfb8a1c-4c59-46bd-be7c-05b4d60bf1ef ResourceVersion:729 Generation:0 CreationTimestamp:2025-10-16 18:42:30 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40016628f0 VolumeMode:0x4001662900 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-703623 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-703623 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e04b2cb0-bcc1-4feb-9b09-cf478c29d73f] Pending
helpers_test.go:352: "sp-pod" [e04b2cb0-bcc1-4feb-9b09-cf478c29d73f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e04b2cb0-bcc1-4feb-9b09-cf478c29d73f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003791202s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-703623 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-703623 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-703623 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9f5886a9-eda1-46ff-9f77-062bb06fe078] Pending
helpers_test.go:352: "sp-pod" [9f5886a9-eda1-46ff-9f77-062bb06fe078] Running
E1016 18:42:52.227938  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003857014s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-703623 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.49s)
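The PersistentVolumeClaim test polls the claim until it reports Bound, runs a pod that writes /tmp/mount/foo, deletes the pod, and then checks that a fresh pod still sees the file. Below is a partial sketch of the poll and the persistence check, assuming the manifests from testdata/ have been applied as in the log; the jsonpath query is my own shorthand for the phase field the test reads out of the full JSON.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // kubectl runs kubectl against the functional-703623 context and returns stdout.
    func kubectl(args ...string) (string, error) {
        all := append([]string{"--context", "functional-703623"}, args...)
        out, err := exec.Command("kubectl", all...).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // Poll the claim roughly the way the test does, until it reports Bound.
        for {
            phase, err := kubectl("get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}")
            fmt.Println("pvc phase:", phase, err)
            if phase == "Bound" {
                break
            }
            time.Sleep(2 * time.Second)
        }
        // The first pod wrote /tmp/mount/foo and was deleted; a fresh sp-pod should
        // still list the file, which is what shows the claim-backed volume persists.
        out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
        fmt.Println(out, err)
    }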

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh -n functional-703623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cp functional-703623:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd170038937/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh -n functional-703623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh -n functional-703623 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.59s)
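In short, minikube cp is exercised host-to-guest, guest-to-host, and into a guest directory that does not yet exist, with each copy verified by cat-ing the target over SSH. A rough equivalent (the /tmp/cp-test.txt destination and the diff step are arbitrary additions, not from the test):
  out/minikube-linux-arm64 -p functional-703623 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-703623 ssh -n functional-703623 "sudo cat /home/docker/cp-test.txt"   # host -> guest
  out/minikube-linux-arm64 -p functional-703623 cp functional-703623:/home/docker/cp-test.txt /tmp/cp-test.txt
  diff testdata/cp-test.txt /tmp/cp-test.txt                                                                   # guest -> host round-trips cleanly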

                                                
                                    
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/290312/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /etc/test/nested/copy/290312/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
TestFunctional/parallel/CertSync (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/290312.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /etc/ssl/certs/290312.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/290312.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /usr/share/ca-certificates/290312.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2903122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /etc/ssl/certs/2903122.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2903122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /usr/share/ca-certificates/2903122.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
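The same per-user certificate material is checked in three places inside the VM: as 290312.pem under /etc/ssl/certs and /usr/share/ca-certificates, and under an openssl-hash style alias (51391683.0) in /etc/ssl/certs; a second cert (2903122.pem / 3ec20f2e.0) gets the same treatment. A quick manual spot check, assuming the same profile:
  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /etc/ssl/certs/290312.pem"    # synced cert
  out/minikube-linux-arm64 -p functional-703623 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hashed alias the test also checks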

                                                
                                    
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-703623 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
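The go-template above flattens the first node's label map into a space-separated list of keys. Roughly the same information can be read with stock kubectl output options (these variants are my own, not what the test runs):
  kubectl --context functional-703623 get nodes -o jsonpath='{.items[0].metadata.labels}'
  kubectl --context functional-703623 get nodes --show-labels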

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 ssh "sudo systemctl is-active docker": exit status 1 (727.272063ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 ssh "sudo systemctl is-active containerd": exit status 1 (403.257811ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.13s)
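Both non-zero exits above are the expected result on a crio cluster: the docker and containerd units report inactive, systemctl is-active exits with status 3 for an inactive unit (visible in the ssh stderr), and the minikube ssh wrapper surfaces that as exit status 1. A direct check, assuming the same profile (the crio line is my addition, not part of the test):
  out/minikube-linux-arm64 -p functional-703623 ssh "sudo systemctl is-active docker"      # prints inactive, exits non-zero
  out/minikube-linux-arm64 -p functional-703623 ssh "sudo systemctl is-active crio"        # the active runtime should report active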

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-703623 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-703623 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-703623 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-703623 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 312738: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-703623 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-703623 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c9c30cb6-c9e5-4fd6-a030-bebefa8b9b30] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c9c30cb6-c9e5-4fd6-a030-bebefa8b9b30] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003540938s
I1016 18:42:34.850839  290312 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-703623 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.87.177 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
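Pulling the tunnel steps together: the nginx-svc LoadBalancer only receives an ingress IP (10.106.87.177 above) while minikube tunnel is running, and that IP is then reachable straight from the host. A condensed sketch, assuming the same profile and testdata service (the curl line is my addition, not from the test):
  out/minikube-linux-arm64 -p functional-703623 tunnel --alsologtostderr &    # keep running in the background
  kubectl --context functional-703623 apply -f testdata/testsvc.yaml
  curl -s "http://$(kubectl --context functional-703623 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/"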

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-703623 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "376.345031ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "51.737784ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "366.129082ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.130302ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdany-port4223276222/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760640760005742034" to /tmp/TestFunctionalparallelMountCmdany-port4223276222/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760640760005742034" to /tmp/TestFunctionalparallelMountCmdany-port4223276222/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760640760005742034" to /tmp/TestFunctionalparallelMountCmdany-port4223276222/001/test-1760640760005742034
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 16 18:52 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 16 18:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 16 18:52 test-1760640760005742034
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh cat /mount-9p/test-1760640760005742034
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-703623 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [cf815c1c-2d26-4492-8142-8eb45063a445] Pending
helpers_test.go:352: "busybox-mount" [cf815c1c-2d26-4492-8142-8eb45063a445] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [cf815c1c-2d26-4492-8142-8eb45063a445] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [cf815c1c-2d26-4492-8142-8eb45063a445] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003802123s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-703623 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdany-port4223276222/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.38s)
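The any-port test drives a 9p mount from a host temp directory into the guest at /mount-9p, confirms it with findmnt, then lets a busybox pod read and delete files through the same mount. A bare-bones manual version (/tmp/demo-mount is an arbitrary host directory, not from the test):
  mkdir -p /tmp/demo-mount && echo hello > /tmp/demo-mount/created-by-hand
  out/minikube-linux-arm64 mount -p functional-703623 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is up
  out/minikube-linux-arm64 -p functional-703623 ssh "ls -la /mount-9p"                 # created-by-hand should appear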

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdspecific-port1359390389/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.294271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1016 18:52:46.742799  290312 retry.go:31] will retry after 514.105178ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdspecific-port1359390389/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 ssh "sudo umount -f /mount-9p": exit status 1 (289.218706ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-703623 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdspecific-port1359390389/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3500291676/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3500291676/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3500291676/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T" /mount1: exit status 1 (576.500933ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1016 18:52:48.893313  290312 retry.go:31] will retry after 294.217725ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-703623 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3500291676/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3500291676/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-703623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3500291676/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
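VerifyCleanup starts three parallel mounts and then relies on a single kill call to tear them all down; the later "unable to find parent, assuming dead" lines confirm the mount processes were already gone when the test tried to stop them individually. The cleanup call, exactly as used above:
  out/minikube-linux-arm64 mount -p functional-703623 --kill=true   # terminates the profile's mount processes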

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 service list: (1.43207863s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 service list -o json
2025/10/16 18:52:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 service list -o json: (1.539762666s)
functional_test.go:1504: Took "1.539854024s" to run "out/minikube-linux-arm64 -p functional-703623 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.54s)
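service list -o json is the machine-readable counterpart of the plain listing above. For scripting it can be piped through jq (jq is my assumption here, not something the test uses):
  out/minikube-linux-arm64 -p functional-703623 service list -o json | jq .   # pretty-print, then filter fields as needed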

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 version -o=json --components: (1.322013035s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-703623 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-703623 image ls --format short --alsologtostderr:
I1016 18:53:07.992224  318711 out.go:360] Setting OutFile to fd 1 ...
I1016 18:53:07.992402  318711 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:07.992411  318711 out.go:374] Setting ErrFile to fd 2...
I1016 18:53:07.992415  318711 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:07.992677  318711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
I1016 18:53:07.993447  318711 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:07.993578  318711 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:07.994750  318711 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
I1016 18:53:08.027271  318711 ssh_runner.go:195] Run: systemctl --version
I1016 18:53:08.027331  318711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
I1016 18:53:08.060444  318711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
I1016 18:53:08.172660  318711 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-703623 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-703623 image ls --format table --alsologtostderr:
I1016 18:53:09.745226  319086 out.go:360] Setting OutFile to fd 1 ...
I1016 18:53:09.745465  319086 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:09.745504  319086 out.go:374] Setting ErrFile to fd 2...
I1016 18:53:09.745524  319086 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:09.745841  319086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
I1016 18:53:09.746739  319086 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:09.746981  319086 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:09.747703  319086 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
I1016 18:53:09.770789  319086 ssh_runner.go:195] Run: systemctl --version
I1016 18:53:09.770883  319086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
I1016 18:53:09.788107  319086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
I1016 18:53:09.891991  319086 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-703623 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etc
d:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-schedu
ler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"re
poTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags"
:[],"size":"42263767"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e
14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-703623 image ls --format json --alsologtostderr:
I1016 18:53:09.500591  319050 out.go:360] Setting OutFile to fd 1 ...
I1016 18:53:09.500756  319050 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:09.500782  319050 out.go:374] Setting ErrFile to fd 2...
I1016 18:53:09.500793  319050 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:09.501473  319050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
I1016 18:53:09.502159  319050 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:09.502327  319050 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:09.502827  319050 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
I1016 18:53:09.520998  319050 ssh_runner.go:195] Run: systemctl --version
I1016 18:53:09.521080  319050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
I1016 18:53:09.546200  319050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
I1016 18:53:09.651785  319050 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
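The JSON listing is assembled from the same "sudo crictl images --output json" call visible in the stderr trace. To pull out just the tagged names on the host side, a jq filter over the repoTags field shown above works (jq is my assumption, not part of the test):
  out/minikube-linux-arm64 -p functional-703623 image ls --format json | jq -r '.[].repoTags[]'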

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-703623 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-703623 image ls --format yaml --alsologtostderr:
I1016 18:53:09.262980  319014 out.go:360] Setting OutFile to fd 1 ...
I1016 18:53:09.263195  319014 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:09.263227  319014 out.go:374] Setting ErrFile to fd 2...
I1016 18:53:09.263248  319014 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:09.263644  319014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
I1016 18:53:09.264662  319014 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:09.265235  319014 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:09.265777  319014 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
I1016 18:53:09.285988  319014 ssh_runner.go:195] Run: systemctl --version
I1016 18:53:09.286043  319014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
I1016 18:53:09.304121  319014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
I1016 18:53:09.407771  319014 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-703623 ssh pgrep buildkitd: exit status 1 (349.770702ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image build -t localhost/my-image:functional-703623 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-703623 image build -t localhost/my-image:functional-703623 testdata/build --alsologtostderr: (3.351780028s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-703623 image build -t localhost/my-image:functional-703623 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4e77408450d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-703623
--> 6725ecffee5
Successfully tagged localhost/my-image:functional-703623
6725ecffee578443a3d28521c3ed86cbf1720664f60d7962fb476eeb0b1f1225
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-703623 image build -t localhost/my-image:functional-703623 testdata/build --alsologtostderr:
I1016 18:53:08.788732  318919 out.go:360] Setting OutFile to fd 1 ...
I1016 18:53:08.789383  318919 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:08.789395  318919 out.go:374] Setting ErrFile to fd 2...
I1016 18:53:08.789400  318919 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:53:08.789900  318919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
I1016 18:53:08.791190  318919 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:08.792043  318919 config.go:182] Loaded profile config "functional-703623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 18:53:08.792731  318919 cli_runner.go:164] Run: docker container inspect functional-703623 --format={{.State.Status}}
I1016 18:53:08.812070  318919 ssh_runner.go:195] Run: systemctl --version
I1016 18:53:08.812129  318919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-703623
I1016 18:53:08.832660  318919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/functional-703623/id_rsa Username:docker}
I1016 18:53:08.948362  318919 build_images.go:161] Building image from path: /tmp/build.1092883823.tar
I1016 18:53:08.948446  318919 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1016 18:53:08.958189  318919 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1092883823.tar
I1016 18:53:08.962121  318919 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1092883823.tar: stat -c "%s %y" /var/lib/minikube/build/build.1092883823.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1092883823.tar': No such file or directory
I1016 18:53:08.962150  318919 ssh_runner.go:362] scp /tmp/build.1092883823.tar --> /var/lib/minikube/build/build.1092883823.tar (3072 bytes)
I1016 18:53:08.985623  318919 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1092883823
I1016 18:53:08.994789  318919 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1092883823 -xf /var/lib/minikube/build/build.1092883823.tar
I1016 18:53:09.004485  318919 crio.go:315] Building image: /var/lib/minikube/build/build.1092883823
I1016 18:53:09.004612  318919 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-703623 /var/lib/minikube/build/build.1092883823 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1016 18:53:12.062743  318919 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-703623 /var/lib/minikube/build/build.1092883823 --cgroup-manager=cgroupfs: (3.058069859s)
I1016 18:53:12.062821  318919 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1092883823
I1016 18:53:12.071377  318919 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1092883823.tar
I1016 18:53:12.079454  318919 build_images.go:217] Built localhost/my-image:functional-703623 from /tmp/build.1092883823.tar
I1016 18:53:12.079487  318919 build_images.go:133] succeeded building to: functional-703623
I1016 18:53:12.079494  318919 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)
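On the crio runtime, image build works by copying a tar of the build context into the node and running podman build there, as the trace shows (build.1092883823.tar, then sudo podman build ... --cgroup-manager=cgroupfs). The user-facing calls are just the two commands the test runs:
  out/minikube-linux-arm64 -p functional-703623 image build -t localhost/my-image:functional-703623 testdata/build --alsologtostderr
  out/minikube-linux-arm64 -p functional-703623 image ls   # localhost/my-image:functional-703623 should now be listed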

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-703623
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image rm kicbase/echo-server:functional-703623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-703623 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
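The three UpdateContextCmd subtests above all run the same command: `minikube update-context` rewrites the profile's kubeconfig entry so its server address matches the cluster's current endpoint. A minimal sketch, assuming `minikube` and `kubectl` are on PATH and reusing the functional-703623 profile from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Re-sync the kubeconfig entry for the profile with the cluster's current endpoint.
	if out, err := exec.Command("minikube", "-p", "functional-703623",
		"update-context").CombinedOutput(); err != nil {
		log.Fatalf("update-context failed: %v\n%s", err, out)
	}

	// Print the server URL kubectl will now use for the current context.
	out, err := exec.Command("kubectl", "config", "view", "--minify",
		"-o", "jsonpath={.clusters[0].cluster.server}").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl config view failed: %v\n%s", err, out)
	}
	fmt.Println(string(out))
}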

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-703623
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-703623
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-703623
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (172.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1016 18:55:08.362420  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m51.94155548s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (172.83s)
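The `--ha` start above brings up a multi-control-plane cluster behind a shared endpoint (192.168.49.254:8443 in the status logs further down). A minimal reproduction sketch, assuming a `minikube` binary on PATH and a working docker daemon; the profile name and flags are copied from the command in this log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Start an HA control plane on the docker driver with the crio runtime,
	// waiting for all components to be healthy before returning.
	start := exec.Command("minikube", "-p", "ha-556988", "start",
		"--ha", "--memory", "3072", "--wait", "true",
		"--driver=docker", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	// Print per-node status, as the test does immediately after the start completes.
	status := exec.Command("minikube", "-p", "ha-556988", "status")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	if err := status.Run(); err != nil {
		log.Fatalf("status failed: %v", err)
	}
}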

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 kubectl -- rollout status deployment/busybox: (4.743328151s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-8m2wv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-g6s82 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-zdc2h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-8m2wv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-g6s82 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-zdc2h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-8m2wv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-g6s82 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-zdc2h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.51s)
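The DeployApp checks above roll out a small busybox Deployment and verify that every replica can resolve both an external name (kubernetes.io) and the in-cluster service names. A minimal sketch of the same DNS check, assuming kubectl can reach the ha-556988 context and the busybox deployment is already rolled out; like the test, it discovers pod names at run time instead of hard-coding them:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List pod names in the default namespace, as the test does via jsonpath.
	out, err := exec.Command("kubectl", "--context", "ha-556988", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("listing pods failed: %v", err)
	}

	// Resolve an in-cluster and an external name from inside every busybox replica.
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue
		}
		for _, name := range []string{"kubernetes.default.svc.cluster.local", "kubernetes.io"} {
			cmd := exec.Command("kubectl", "--context", "ha-556988",
				"exec", pod, "--", "nslookup", name)
			if res, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, res)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}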

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-8m2wv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-8m2wv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-g6s82 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-g6s82 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-zdc2h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 kubectl -- exec busybox-7b57f96db7-zdc2h -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)
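host.minikube.internal is the name minikube injects so pods can reach the host machine; the test resolves it inside each replica and then pings the docker network gateway (192.168.49.1). A minimal single-pod sketch of the same check, assuming the ha-556988 context is reachable; the pod name is taken from this run and would need to be substituted with a live one:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	pod := "busybox-7b57f96db7-8m2wv" // from this log; replace with a current pod name

	// Resolve the host alias from inside the pod, using the same pipeline as ha_test.go:207.
	lookup := exec.Command("kubectl", "--context", "ha-556988", "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	addr, err := lookup.Output()
	if err != nil {
		log.Fatalf("nslookup failed: %v", err)
	}
	fmt.Printf("host.minikube.internal resolves to %s", addr)

	// Confirm the pod can actually reach the host-side gateway address.
	ping := exec.Command("kubectl", "--context", "ha-556988", "exec", pod, "--",
		"sh", "-c", "ping -c 1 192.168.49.1")
	if out, err := ping.CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, out)
	}
	fmt.Println("gateway 192.168.49.1 is reachable from", pod)
}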

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (60.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 node add --alsologtostderr -v 5
E1016 18:56:31.431657  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 node add --alsologtostderr -v 5: (59.484705643s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5: (1.057177815s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-556988 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.074009556s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 status --output json --alsologtostderr -v 5: (1.057855285s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp testdata/cp-test.txt ha-556988:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002313520/001/cp-test_ha-556988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988:/home/docker/cp-test.txt ha-556988-m02:/home/docker/cp-test_ha-556988_ha-556988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test_ha-556988_ha-556988-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988:/home/docker/cp-test.txt ha-556988-m03:/home/docker/cp-test_ha-556988_ha-556988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test_ha-556988_ha-556988-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988:/home/docker/cp-test.txt ha-556988-m04:/home/docker/cp-test_ha-556988_ha-556988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test_ha-556988_ha-556988-m04.txt"
E1016 18:57:24.359313  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:57:24.365661  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:57:24.377192  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:57:24.398688  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:57:24.440402  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:57:24.521837  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp testdata/cp-test.txt ha-556988-m02:/home/docker/cp-test.txt
E1016 18:57:24.683131  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test.txt"
E1016 18:57:25.005319  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002313520/001/cp-test_ha-556988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test.txt"
E1016 18:57:25.646838  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m02:/home/docker/cp-test.txt ha-556988:/home/docker/cp-test_ha-556988-m02_ha-556988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test_ha-556988-m02_ha-556988.txt"
E1016 18:57:26.929976  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m02:/home/docker/cp-test.txt ha-556988-m03:/home/docker/cp-test_ha-556988-m02_ha-556988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test_ha-556988-m02_ha-556988-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m02:/home/docker/cp-test.txt ha-556988-m04:/home/docker/cp-test_ha-556988-m02_ha-556988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test_ha-556988-m02_ha-556988-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp testdata/cp-test.txt ha-556988-m03:/home/docker/cp-test.txt
E1016 18:57:29.492163  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002313520/001/cp-test_ha-556988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt ha-556988:/home/docker/cp-test_ha-556988-m03_ha-556988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt ha-556988-m02:/home/docker/cp-test_ha-556988-m03_ha-556988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m03:/home/docker/cp-test.txt ha-556988-m04:/home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test_ha-556988-m03_ha-556988-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp testdata/cp-test.txt ha-556988-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test.txt"
E1016 18:57:34.613484  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002313520/001/cp-test_ha-556988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988:/home/docker/cp-test_ha-556988-m04_ha-556988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988 "sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m02:/home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m02 "sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 cp ha-556988-m04:/home/docker/cp-test.txt ha-556988-m03:/home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 ssh -n ha-556988-m03 "sudo cat /home/docker/cp-test_ha-556988-m04_ha-556988-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.97s)
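The CopyFile matrix above exercises `minikube cp` in every direction (host to node, node to host, node to node) and verifies each copy with `minikube ssh -n <node> "sudo cat ..."`. A minimal sketch of one round trip, assuming a local testdata/cp-test.txt file exists and reusing the profile and node names from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a minikube subcommand against the ha-556988 profile and fails fast on error.
func run(args ...string) string {
	out, err := exec.Command("minikube", append([]string{"-p", "ha-556988"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Host -> primary node.
	run("cp", "testdata/cp-test.txt", "ha-556988:/home/docker/cp-test.txt")

	// Primary node -> secondary node.
	run("cp", "ha-556988:/home/docker/cp-test.txt",
		"ha-556988-m02:/home/docker/cp-test_ha-556988_ha-556988-m02.txt")

	// Read the file back on the target node to confirm the contents arrived intact.
	fmt.Print(run("ssh", "-n", "ha-556988-m02",
		"sudo cat /home/docker/cp-test_ha-556988_ha-556988-m02.txt"))
}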

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 node stop m02 --alsologtostderr -v 5
E1016 18:57:44.854854  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 node stop m02 --alsologtostderr -v 5: (12.122675629s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5: exit status 7 (806.504038ms)

                                                
                                                
-- stdout --
	ha-556988
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-556988-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-556988-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-556988-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:57:51.027602  333955 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:57:51.027874  333955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:57:51.027901  333955 out.go:374] Setting ErrFile to fd 2...
	I1016 18:57:51.027906  333955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:57:51.028350  333955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 18:57:51.028705  333955 out.go:368] Setting JSON to false
	I1016 18:57:51.028759  333955 mustload.go:65] Loading cluster: ha-556988
	I1016 18:57:51.029184  333955 notify.go:220] Checking for updates...
	I1016 18:57:51.029453  333955 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:57:51.029476  333955 status.go:174] checking status of ha-556988 ...
	I1016 18:57:51.030303  333955 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 18:57:51.054756  333955 status.go:371] ha-556988 host status = "Running" (err=<nil>)
	I1016 18:57:51.054777  333955 host.go:66] Checking if "ha-556988" exists ...
	I1016 18:57:51.055218  333955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988
	I1016 18:57:51.091504  333955 host.go:66] Checking if "ha-556988" exists ...
	I1016 18:57:51.091949  333955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:57:51.092018  333955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988
	I1016 18:57:51.119843  333955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988/id_rsa Username:docker}
	I1016 18:57:51.231332  333955 ssh_runner.go:195] Run: systemctl --version
	I1016 18:57:51.238127  333955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:57:51.256005  333955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:57:51.316002  333955 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-16 18:57:51.305254835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 18:57:51.316694  333955 kubeconfig.go:125] found "ha-556988" server: "https://192.168.49.254:8443"
	I1016 18:57:51.316736  333955 api_server.go:166] Checking apiserver status ...
	I1016 18:57:51.316780  333955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:57:51.329464  333955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	I1016 18:57:51.338426  333955 api_server.go:182] apiserver freezer: "13:freezer:/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio/crio-6d1f766611ccd51121a7524b51093ec3b40bb027090386ba7736cd97f52f9140"
	I1016 18:57:51.338508  333955 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ee539784e727b38f5176505a1950fda502d78c47d80b5f56ee686d9a8e7f0000/crio/crio-6d1f766611ccd51121a7524b51093ec3b40bb027090386ba7736cd97f52f9140/freezer.state
	I1016 18:57:51.346085  333955 api_server.go:204] freezer state: "THAWED"
	I1016 18:57:51.346114  333955 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1016 18:57:51.356122  333955 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1016 18:57:51.356154  333955 status.go:463] ha-556988 apiserver status = Running (err=<nil>)
	I1016 18:57:51.356166  333955 status.go:176] ha-556988 status: &{Name:ha-556988 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:57:51.356184  333955 status.go:174] checking status of ha-556988-m02 ...
	I1016 18:57:51.356506  333955 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 18:57:51.373853  333955 status.go:371] ha-556988-m02 host status = "Stopped" (err=<nil>)
	I1016 18:57:51.373880  333955 status.go:384] host is not running, skipping remaining checks
	I1016 18:57:51.373886  333955 status.go:176] ha-556988-m02 status: &{Name:ha-556988-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:57:51.373906  333955 status.go:174] checking status of ha-556988-m03 ...
	I1016 18:57:51.374222  333955 cli_runner.go:164] Run: docker container inspect ha-556988-m03 --format={{.State.Status}}
	I1016 18:57:51.394173  333955 status.go:371] ha-556988-m03 host status = "Running" (err=<nil>)
	I1016 18:57:51.394215  333955 host.go:66] Checking if "ha-556988-m03" exists ...
	I1016 18:57:51.394607  333955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m03
	I1016 18:57:51.414209  333955 host.go:66] Checking if "ha-556988-m03" exists ...
	I1016 18:57:51.414759  333955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:57:51.414851  333955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m03
	I1016 18:57:51.434353  333955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m03/id_rsa Username:docker}
	I1016 18:57:51.534960  333955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:57:51.548470  333955 kubeconfig.go:125] found "ha-556988" server: "https://192.168.49.254:8443"
	I1016 18:57:51.548502  333955 api_server.go:166] Checking apiserver status ...
	I1016 18:57:51.548556  333955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:57:51.560744  333955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	I1016 18:57:51.569618  333955 api_server.go:182] apiserver freezer: "13:freezer:/docker/fb73e96b1ad2ba0acaabaad78b31a2aecbd0586522ca8bc9e66bfdd6cbea19a5/crio/crio-c46890da0035cf4260d565aa390d03ef9b75ac87b155807e1e0e66c5948b2fc4"
	I1016 18:57:51.569692  333955 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fb73e96b1ad2ba0acaabaad78b31a2aecbd0586522ca8bc9e66bfdd6cbea19a5/crio/crio-c46890da0035cf4260d565aa390d03ef9b75ac87b155807e1e0e66c5948b2fc4/freezer.state
	I1016 18:57:51.577991  333955 api_server.go:204] freezer state: "THAWED"
	I1016 18:57:51.578020  333955 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1016 18:57:51.586137  333955 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1016 18:57:51.586164  333955 status.go:463] ha-556988-m03 apiserver status = Running (err=<nil>)
	I1016 18:57:51.586196  333955 status.go:176] ha-556988-m03 status: &{Name:ha-556988-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:57:51.586221  333955 status.go:174] checking status of ha-556988-m04 ...
	I1016 18:57:51.586527  333955 cli_runner.go:164] Run: docker container inspect ha-556988-m04 --format={{.State.Status}}
	I1016 18:57:51.604020  333955 status.go:371] ha-556988-m04 host status = "Running" (err=<nil>)
	I1016 18:57:51.604043  333955 host.go:66] Checking if "ha-556988-m04" exists ...
	I1016 18:57:51.604336  333955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-556988-m04
	I1016 18:57:51.622511  333955 host.go:66] Checking if "ha-556988-m04" exists ...
	I1016 18:57:51.622824  333955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:57:51.622869  333955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-556988-m04
	I1016 18:57:51.651518  333955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/ha-556988-m04/id_rsa Username:docker}
	I1016 18:57:51.754725  333955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:57:51.767879  333955 status.go:176] ha-556988-m04 status: &{Name:ha-556988-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.93s)
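Note the exit code in the run above: with m02 stopped, `minikube status` still prints per-node details but exits 7 instead of 0, which is how the test distinguishes a degraded cluster from a healthy one. A minimal sketch that reads that exit code from Go, assuming the ha-556988 profile from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-556988", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	// Exit code 0 means every node is up; a non-zero code (7 in the run above)
	// indicates at least one host, kubelet, or apiserver is stopped or unreachable.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("cluster degraded, status exit code %d\n", exitErr.ExitCode())
	} else if err == nil {
		fmt.Println("all nodes running")
	}
}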

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (30.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 node start m02 --alsologtostderr -v 5
E1016 18:58:05.336716  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 node start m02 --alsologtostderr -v 5: (28.575889458s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5: (1.418000155s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.461573231s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (24.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 stop --alsologtostderr -v 5: (24.060177694s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5: exit status 7 (111.447143ms)

                                                
                                                
-- stdout --
	ha-556988
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-556988-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-556988-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:07:57.933873  345064 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:07:57.934053  345064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:07:57.934083  345064 out.go:374] Setting ErrFile to fd 2...
	I1016 19:07:57.934105  345064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:07:57.934392  345064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:07:57.934617  345064 out.go:368] Setting JSON to false
	I1016 19:07:57.934677  345064 mustload.go:65] Loading cluster: ha-556988
	I1016 19:07:57.935011  345064 notify.go:220] Checking for updates...
	I1016 19:07:57.935195  345064 config.go:182] Loaded profile config "ha-556988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:07:57.935231  345064 status.go:174] checking status of ha-556988 ...
	I1016 19:07:57.935793  345064 cli_runner.go:164] Run: docker container inspect ha-556988 --format={{.State.Status}}
	I1016 19:07:57.955588  345064 status.go:371] ha-556988 host status = "Stopped" (err=<nil>)
	I1016 19:07:57.955614  345064 status.go:384] host is not running, skipping remaining checks
	I1016 19:07:57.955622  345064 status.go:176] ha-556988 status: &{Name:ha-556988 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 19:07:57.955649  345064 status.go:174] checking status of ha-556988-m02 ...
	I1016 19:07:57.955955  345064 cli_runner.go:164] Run: docker container inspect ha-556988-m02 --format={{.State.Status}}
	I1016 19:07:57.976554  345064 status.go:371] ha-556988-m02 host status = "Stopped" (err=<nil>)
	I1016 19:07:57.976580  345064 status.go:384] host is not running, skipping remaining checks
	I1016 19:07:57.976587  345064 status.go:176] ha-556988-m02 status: &{Name:ha-556988-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 19:07:57.976607  345064 status.go:174] checking status of ha-556988-m04 ...
	I1016 19:07:57.976905  345064 cli_runner.go:164] Run: docker container inspect ha-556988-m04 --format={{.State.Status}}
	I1016 19:07:57.994120  345064 status.go:371] ha-556988-m04 host status = "Stopped" (err=<nil>)
	I1016 19:07:57.994145  345064 status.go:384] host is not running, skipping remaining checks
	I1016 19:07:57.994152  345064 status.go:176] ha-556988-m04 status: &{Name:ha-556988-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (71.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m10.073059246s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (71.08s)
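The final check above uses a kubectl go-template to print one Ready-condition status per node, so a fully recovered cluster prints only "True" lines. A minimal sketch that runs the same template and fails if any node reports otherwise, assuming kubectl's current context points at the restarted ha-556988 cluster:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// One line per node: the status of its Ready condition ("True", "False", or "Unknown").
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes failed: %v", err)
	}

	for _, line := range strings.Fields(string(out)) {
		if line != "True" {
			log.Fatalf("found a node that is not Ready: %q", line)
		}
	}
	fmt.Println("all nodes report Ready=True")
}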

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (81.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 node add --control-plane --alsologtostderr -v 5
E1016 19:10:08.360248  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 node add --control-plane --alsologtostderr -v 5: (1m19.931696786s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-556988 status --alsologtostderr -v 5: (1.098080153s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.092541268s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                    
x
+
TestJSONOutput/start/Command (80.36s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-716057 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-716057 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.361382306s)
--- PASS: TestJSONOutput/start/Command (80.36s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-716057 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-716057 --output=json --user=testUser: (5.835390319s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-045711 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-045711 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.177156ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b0078ef8-56df-45b5-994b-7cfa34234c2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-045711] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12791a35-6a0e-46e5-b9c7-0cf34c03a15a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21738"}}
	{"specversion":"1.0","id":"6980e2da-c946-47cf-b357-4b2ef8efb76a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"441d3434-4353-450c-976c-46d78038b511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig"}}
	{"specversion":"1.0","id":"5387c3db-fcca-45cc-a242-ce5a40a284ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube"}}
	{"specversion":"1.0","id":"8ec307c0-ba28-4d1d-8ca9-7c5c95e0b55d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d6eb24e8-b051-4f9f-8e56-17b078eee04c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"84726ff7-1ee1-4f1f-95e3-406d8cb7ec07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-045711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-045711
--- PASS: TestErrorJSONOutput (0.24s)
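With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as in the stdout capture above: step events carry currentstep/totalsteps/name, and error events carry exitcode, name, and message. A minimal sketch that decodes such a stream from a normal start, assuming `minikube` is on PATH; the profile name follows the earlier TestJSONOutput/start/Command run, and the struct fields mirror only what is visible in the events above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// event mirrors the fields visible in the JSON lines above; unknown fields are ignored.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "json-output-716057",
		"--output=json", "--driver=docker", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Each line is a self-contained JSON event; print progress steps and surface errors.
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Printf("minikube exited: %v", err)
	}
}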

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (69.42s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-501453 --network=
E1016 19:12:24.358985  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:13:11.433272  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-501453 --network=: (1m7.125120938s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-501453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-501453
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-501453: (2.267498325s)
--- PASS: TestKicCustomNetwork/create_custom_network (69.42s)
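The test above starts a cluster with `--network=` (letting minikube create its own docker network for the profile) and then checks `docker network ls`; the next test does the same against the default bridge. A minimal sketch of the same idea with an explicitly named network, assuming `minikube` and `docker` are on PATH; the profile and network names here are placeholders, not values from this run:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile, network := "docker-network-demo", "demo-net" // placeholder names

	// Start a cluster attached to a user-named docker network.
	start := exec.Command("minikube", "start", "-p", profile, "--driver=docker",
		"--container-runtime=crio", "--network="+network)
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	// Confirm the network exists, mirroring the `docker network ls --format {{.Name}}` check.
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		log.Fatalf("docker network ls failed: %v", err)
	}
	if strings.Contains(string(out), network) {
		fmt.Printf("network %s is present\n", network)
	}
}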

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (39.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-949730 --network=bridge
E1016 19:13:47.425713  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-949730 --network=bridge: (37.686784171s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-949730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-949730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-949730: (2.10138056s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.82s)
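
For reference, the two network modes exercised above reduce to the following commands; the profile names are placeholders rather than the ones from this run:

	# leave --network empty so minikube creates a dedicated docker network for the profile
	out/minikube-linux-arm64 start -p net-demo-1 --network=
	# or attach the node container to docker's default bridge network instead
	out/minikube-linux-arm64 start -p net-demo-2 --network=bridge
	# list docker networks to confirm, then remove both profiles
	docker network ls --format {{.Name}}
	out/minikube-linux-arm64 delete -p net-demo-1
	out/minikube-linux-arm64 delete -p net-demo-2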

                                                
                                    
TestKicExistingNetwork (39.45s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1016 19:14:07.534132  290312 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1016 19:14:07.550112  290312 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1016 19:14:07.550196  290312 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1016 19:14:07.550212  290312 cli_runner.go:164] Run: docker network inspect existing-network
W1016 19:14:07.565858  290312 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1016 19:14:07.565893  290312 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1016 19:14:07.565915  290312 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1016 19:14:07.566021  290312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1016 19:14:07.583172  290312 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7adcf17f22ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:ab:9e:ea:f5:d5} reservation:<nil>}
I1016 19:14:07.583463  290312 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30f20}
I1016 19:14:07.583480  290312 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1016 19:14:07.583529  290312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1016 19:14:07.644245  290312 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-266350 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-266350 --network=existing-network: (37.131961846s)
helpers_test.go:175: Cleaning up "existing-network-266350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-266350
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-266350: (2.171731897s)
I1016 19:14:46.964468  290312 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (39.45s)
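
For reference, the pre-existing-network case above is roughly this sequence; the network and profile names are illustrative:

	# create the bridge network up front, much as the test harness does
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	# point minikube at the already-existing network by name
	out/minikube-linux-arm64 start -p existing-net-demo --network=existing-network
	# delete the profile; the pre-created network is cleaned up separately
	out/minikube-linux-arm64 delete -p existing-net-demo
	docker network rm existing-network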

                                                
                                    
TestKicCustomSubnet (38.48s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-252516 --subnet=192.168.60.0/24
E1016 19:15:08.360124  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-252516 --subnet=192.168.60.0/24: (36.291458583s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-252516 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-252516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-252516
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-252516: (2.164621692s)
--- PASS: TestKicCustomSubnet (38.48s)
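
For reference, pinning and verifying the subnet as above (placeholder profile name):

	# pin the profile's docker network to a specific subnet
	out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
	# the network is named after the profile; confirm which subnet it got
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
	out/minikube-linux-arm64 delete -p subnet-demo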

                                                
                                    
TestKicStaticIP (33.05s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-247536 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-247536 --static-ip=192.168.200.200: (30.59168269s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-247536 ip
helpers_test.go:175: Cleaning up "static-ip-247536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-247536
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-247536: (2.285343776s)
--- PASS: TestKicStaticIP (33.05s)
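
For reference, requesting and checking a static node IP as above (placeholder profile name):

	# ask for a fixed IP for the node container
	out/minikube-linux-arm64 start -p static-ip-demo --static-ip=192.168.200.200
	# print the IP minikube assigned, which should match the requested one
	out/minikube-linux-arm64 -p static-ip-demo ip
	out/minikube-linux-arm64 delete -p static-ip-demo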

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (72.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-388625 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-388625 --driver=docker  --container-runtime=crio: (30.467170021s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-391267 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-391267 --driver=docker  --container-runtime=crio: (36.417919497s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-388625
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-391267
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-391267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-391267
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-391267: (2.157640547s)
helpers_test.go:175: Cleaning up "first-388625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-388625
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-388625: (2.068619183s)
--- PASS: TestMinikubeProfile (72.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-428194 --memory=3072 --mount-string /tmp/TestMountStartserial3242819937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-428194 --memory=3072 --mount-string /tmp/TestMountStartserial3242819937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.435798788s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.44s)
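
For reference, a host-path mount like the one started above can be set up and checked as follows; the host path and profile name are illustrative:

	# start a no-Kubernetes profile with a host directory mounted at /minikube-host inside the node
	out/minikube-linux-arm64 start -p mount-demo --memory=3072 --no-kubernetes --driver=docker --container-runtime=crio \
	  --mount-string /tmp/host-dir:/minikube-host --mount-uid 0 --mount-gid 0 --mount-port 46464
	# list the mount point from inside the node to confirm the host directory is visible
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host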

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-428194 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-430081 --memory=3072 --mount-string /tmp/TestMountStartserial3242819937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1016 19:17:24.359706  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-430081 --memory=3072 --mount-string /tmp/TestMountStartserial3242819937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.951479758s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.95s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-430081 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-428194 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-428194 --alsologtostderr -v=5: (1.706732897s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-430081 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-430081
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-430081: (1.292092457s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.76s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-430081
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-430081: (6.761509821s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-430081 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (135.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-786838 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-786838 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.211801115s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.75s)
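
For reference, the two-node bring-up above amounts to (placeholder profile name):

	# create a control plane plus one worker on the docker driver with the crio runtime
	out/minikube-linux-arm64 start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
	# both nodes should report host/kubelet Running in the status output
	out/minikube-linux-arm64 -p multinode-demo status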

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-786838 -- rollout status deployment/busybox: (3.373947121s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-6bg88 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-k4b8r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-6bg88 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-k4b8r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-6bg88 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-k4b8r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.25s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-6bg88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-6bg88 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-k4b8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-786838 -- exec busybox-7b57f96db7-k4b8r -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (58.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-786838 -v=5 --alsologtostderr
E1016 19:20:08.360055  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-786838 -v=5 --alsologtostderr: (58.230246477s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.93s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-786838 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp testdata/cp-test.txt multinode-786838:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1773691938/001/cp-test_multinode-786838.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838:/home/docker/cp-test.txt multinode-786838-m02:/home/docker/cp-test_multinode-786838_multinode-786838-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m02 "sudo cat /home/docker/cp-test_multinode-786838_multinode-786838-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838:/home/docker/cp-test.txt multinode-786838-m03:/home/docker/cp-test_multinode-786838_multinode-786838-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m03 "sudo cat /home/docker/cp-test_multinode-786838_multinode-786838-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp testdata/cp-test.txt multinode-786838-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1773691938/001/cp-test_multinode-786838-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838-m02:/home/docker/cp-test.txt multinode-786838:/home/docker/cp-test_multinode-786838-m02_multinode-786838.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838 "sudo cat /home/docker/cp-test_multinode-786838-m02_multinode-786838.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838-m02:/home/docker/cp-test.txt multinode-786838-m03:/home/docker/cp-test_multinode-786838-m02_multinode-786838-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m03 "sudo cat /home/docker/cp-test_multinode-786838-m02_multinode-786838-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp testdata/cp-test.txt multinode-786838-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1773691938/001/cp-test_multinode-786838-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838-m03:/home/docker/cp-test.txt multinode-786838:/home/docker/cp-test_multinode-786838-m03_multinode-786838.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838 "sudo cat /home/docker/cp-test_multinode-786838-m03_multinode-786838.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 cp multinode-786838-m03:/home/docker/cp-test.txt multinode-786838-m02:/home/docker/cp-test_multinode-786838-m03_multinode-786838-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 ssh -n multinode-786838-m02 "sudo cat /home/docker/cp-test_multinode-786838-m03_multinode-786838-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.53s)
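
For reference, the copy matrix above is built from two primitives, shown here with placeholder names:

	# copy a local file into a specific node (the -m02 suffix selects the second machine)
	out/minikube-linux-arm64 -p multinode-demo cp testdata/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	# read it back over ssh on that node to verify the contents
	out/minikube-linux-arm64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies use a node prefix on both the source and the destination
	out/minikube-linux-arm64 -p multinode-demo cp multinode-demo-m02:/home/docker/cp-test.txt multinode-demo:/home/docker/cp-test.txt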

                                                
                                    
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-786838 node stop m03: (1.31527352s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-786838 status: exit status 7 (567.897615ms)

                                                
                                                
-- stdout --
	multinode-786838
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-786838-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-786838-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr: exit status 7 (561.041639ms)

                                                
                                                
-- stdout --
	multinode-786838
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-786838-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-786838-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:21:17.680441  395819 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:21:17.680616  395819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:21:17.680648  395819 out.go:374] Setting ErrFile to fd 2...
	I1016 19:21:17.680675  395819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:21:17.680993  395819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:21:17.681303  395819 out.go:368] Setting JSON to false
	I1016 19:21:17.681382  395819 mustload.go:65] Loading cluster: multinode-786838
	I1016 19:21:17.681454  395819 notify.go:220] Checking for updates...
	I1016 19:21:17.681928  395819 config.go:182] Loaded profile config "multinode-786838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:21:17.681967  395819 status.go:174] checking status of multinode-786838 ...
	I1016 19:21:17.682559  395819 cli_runner.go:164] Run: docker container inspect multinode-786838 --format={{.State.Status}}
	I1016 19:21:17.704314  395819 status.go:371] multinode-786838 host status = "Running" (err=<nil>)
	I1016 19:21:17.704346  395819 host.go:66] Checking if "multinode-786838" exists ...
	I1016 19:21:17.704730  395819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-786838
	I1016 19:21:17.726912  395819 host.go:66] Checking if "multinode-786838" exists ...
	I1016 19:21:17.727258  395819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:21:17.727315  395819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-786838
	I1016 19:21:17.752031  395819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33268 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/multinode-786838/id_rsa Username:docker}
	I1016 19:21:17.854995  395819 ssh_runner.go:195] Run: systemctl --version
	I1016 19:21:17.861888  395819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:21:17.874733  395819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:21:17.948704  395819 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-16 19:21:17.934215277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:21:17.949324  395819 kubeconfig.go:125] found "multinode-786838" server: "https://192.168.67.2:8443"
	I1016 19:21:17.949362  395819 api_server.go:166] Checking apiserver status ...
	I1016 19:21:17.949415  395819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 19:21:17.960737  395819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	I1016 19:21:17.968839  395819 api_server.go:182] apiserver freezer: "13:freezer:/docker/37940dfbc4dd044d6ad5150b170f582a7db79306b6a91d23e87030a76392fb57/crio/crio-4707f7c6949f140d1b403172b877af717ee64aaabf2859cb12728cdc337a8fe1"
	I1016 19:21:17.968907  395819 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/37940dfbc4dd044d6ad5150b170f582a7db79306b6a91d23e87030a76392fb57/crio/crio-4707f7c6949f140d1b403172b877af717ee64aaabf2859cb12728cdc337a8fe1/freezer.state
	I1016 19:21:17.976615  395819 api_server.go:204] freezer state: "THAWED"
	I1016 19:21:17.976646  395819 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1016 19:21:17.985845  395819 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1016 19:21:17.985880  395819 status.go:463] multinode-786838 apiserver status = Running (err=<nil>)
	I1016 19:21:17.985903  395819 status.go:176] multinode-786838 status: &{Name:multinode-786838 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 19:21:17.985927  395819 status.go:174] checking status of multinode-786838-m02 ...
	I1016 19:21:17.986249  395819 cli_runner.go:164] Run: docker container inspect multinode-786838-m02 --format={{.State.Status}}
	I1016 19:21:18.004412  395819 status.go:371] multinode-786838-m02 host status = "Running" (err=<nil>)
	I1016 19:21:18.004434  395819 host.go:66] Checking if "multinode-786838-m02" exists ...
	I1016 19:21:18.004755  395819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-786838-m02
	I1016 19:21:18.023806  395819 host.go:66] Checking if "multinode-786838-m02" exists ...
	I1016 19:21:18.024140  395819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 19:21:18.024180  395819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-786838-m02
	I1016 19:21:18.042345  395819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21738-288457/.minikube/machines/multinode-786838-m02/id_rsa Username:docker}
	I1016 19:21:18.146946  395819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 19:21:18.160079  395819 status.go:176] multinode-786838-m02 status: &{Name:multinode-786838-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1016 19:21:18.160113  395819 status.go:174] checking status of multinode-786838-m03 ...
	I1016 19:21:18.160423  395819 cli_runner.go:164] Run: docker container inspect multinode-786838-m03 --format={{.State.Status}}
	I1016 19:21:18.178660  395819 status.go:371] multinode-786838-m03 host status = "Stopped" (err=<nil>)
	I1016 19:21:18.178689  395819 status.go:384] host is not running, skipping remaining checks
	I1016 19:21:18.178696  395819 status.go:176] multinode-786838-m03 status: &{Name:multinode-786838-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
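
For reference, stopping a single node and reading the degraded status as above (placeholder profile name):

	# stop only the m03 node of the cluster
	out/minikube-linux-arm64 -p multinode-demo node stop m03
	# status now shows that node as Stopped; in the run above this exited with code 7
	out/minikube-linux-arm64 -p multinode-demo status
	# bring the node back, as the StartAfterStop test below does
	out/minikube-linux-arm64 -p multinode-demo node start m03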

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-786838 node start m03 -v=5 --alsologtostderr: (7.397315676s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.18s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-786838
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-786838
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-786838: (25.055251008s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-786838 --wait=true -v=5 --alsologtostderr
E1016 19:22:24.359440  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-786838 --wait=true -v=5 --alsologtostderr: (51.054059138s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-786838
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.24s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-786838 node delete m03: (4.945946361s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-786838 stop: (24.193711898s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-786838 status: exit status 7 (98.36056ms)

                                                
                                                
-- stdout --
	multinode-786838
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-786838-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr: exit status 7 (95.532308ms)

                                                
                                                
-- stdout --
	multinode-786838
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-786838-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:23:12.574573  403558 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:23:12.574743  403558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:23:12.574774  403558 out.go:374] Setting ErrFile to fd 2...
	I1016 19:23:12.574794  403558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:23:12.575095  403558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:23:12.575328  403558 out.go:368] Setting JSON to false
	I1016 19:23:12.575406  403558 mustload.go:65] Loading cluster: multinode-786838
	I1016 19:23:12.575477  403558 notify.go:220] Checking for updates...
	I1016 19:23:12.576417  403558 config.go:182] Loaded profile config "multinode-786838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:23:12.576463  403558 status.go:174] checking status of multinode-786838 ...
	I1016 19:23:12.577059  403558 cli_runner.go:164] Run: docker container inspect multinode-786838 --format={{.State.Status}}
	I1016 19:23:12.595680  403558 status.go:371] multinode-786838 host status = "Stopped" (err=<nil>)
	I1016 19:23:12.595700  403558 status.go:384] host is not running, skipping remaining checks
	I1016 19:23:12.595707  403558 status.go:176] multinode-786838 status: &{Name:multinode-786838 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 19:23:12.595731  403558 status.go:174] checking status of multinode-786838-m02 ...
	I1016 19:23:12.596056  403558 cli_runner.go:164] Run: docker container inspect multinode-786838-m02 --format={{.State.Status}}
	I1016 19:23:12.618860  403558 status.go:371] multinode-786838-m02 host status = "Stopped" (err=<nil>)
	I1016 19:23:12.618892  403558 status.go:384] host is not running, skipping remaining checks
	I1016 19:23:12.618906  403558 status.go:176] multinode-786838-m02 status: &{Name:multinode-786838-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.39s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-786838 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-786838 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.328726939s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-786838 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-786838
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-786838-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-786838-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.633336ms)

                                                
                                                
-- stdout --
	* [multinode-786838-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-786838-m02' is duplicated with machine name 'multinode-786838-m02' in profile 'multinode-786838'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-786838-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-786838-m03 --driver=docker  --container-runtime=crio: (33.675014646s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-786838
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-786838: exit status 80 (341.229483ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-786838 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-786838-m03 already exists in multinode-786838-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-786838-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-786838-m03: (2.103580937s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.28s)

                                                
                                    
TestPreload (150.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-417367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1016 19:25:08.360167  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-417367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.466009184s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-417367 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-417367 image pull gcr.io/k8s-minikube/busybox: (2.156888865s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-417367
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-417367: (5.970631491s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-417367 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-417367 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m17.611202977s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-417367 image list
helpers_test.go:175: Cleaning up "test-preload-417367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-417367
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-417367: (2.471080885s)
--- PASS: TestPreload (150.91s)
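
For reference, the preload round-trip exercised above (placeholder profile name):

	# start without the preloaded image tarball, pinning an explicit Kubernetes version
	out/minikube-linux-arm64 start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	# pull an extra image into the node, then stop and restart the cluster
	out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p preload-demo
	out/minikube-linux-arm64 start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio
	# the test checks that the pulled image is still listed after the restart
	out/minikube-linux-arm64 -p preload-demo image list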

                                                
                                    
TestScheduledStopUnix (110.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-499711 --memory=3072 --driver=docker  --container-runtime=crio
E1016 19:27:24.359108  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-499711 --memory=3072 --driver=docker  --container-runtime=crio: (34.409815642s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-499711 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-499711 -n scheduled-stop-499711
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-499711 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1016 19:27:51.271373  290312 retry.go:31] will retry after 135.725µs: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.271931  290312 retry.go:31] will retry after 149.622µs: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.272511  290312 retry.go:31] will retry after 168.87µs: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.273715  290312 retry.go:31] will retry after 314.22µs: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.274848  290312 retry.go:31] will retry after 742.28µs: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.275992  290312 retry.go:31] will retry after 847.967µs: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.277127  290312 retry.go:31] will retry after 1.57532ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.279657  290312 retry.go:31] will retry after 2.088414ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.282867  290312 retry.go:31] will retry after 3.625538ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.287133  290312 retry.go:31] will retry after 5.274177ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.293391  290312 retry.go:31] will retry after 7.751372ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.301624  290312 retry.go:31] will retry after 11.52795ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.313854  290312 retry.go:31] will retry after 12.448574ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.327029  290312 retry.go:31] will retry after 26.327117ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.354290  290312 retry.go:31] will retry after 15.934396ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
I1016 19:27:51.370870  290312 retry.go:31] will retry after 33.464057ms: open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/scheduled-stop-499711/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-499711 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-499711 -n scheduled-stop-499711
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-499711
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-499711 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-499711
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-499711: exit status 7 (73.423569ms)

                                                
                                                
-- stdout --
	scheduled-stop-499711
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-499711 -n scheduled-stop-499711
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-499711 -n scheduled-stop-499711: exit status 7 (73.787178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-499711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-499711
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-499711: (4.611254808s)
--- PASS: TestScheduledStopUnix (110.64s)

                                                
                                    
x
+
TestInsufficientStorage (14.14s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-400214 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-400214 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.523079443s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"206b9967-2c33-4913-834a-675c6d3272f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-400214] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0c10fc4-aef2-4d3d-9603-1ca24496b351","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21738"}}
	{"specversion":"1.0","id":"7ec4153b-44bf-4de1-9177-ce85adc714b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5986d2de-f34f-479b-82f9-3da9c2e298bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig"}}
	{"specversion":"1.0","id":"9835fcb5-ab3f-49a7-8a28-f2040aeaa21d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube"}}
	{"specversion":"1.0","id":"821073fd-2f50-42ea-87d5-15889d688846","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"379947cc-8aef-4ad5-a81d-f781ac359cee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9d0df8be-9498-4bc5-98e8-2ad44cdf9792","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f6dee2e3-a795-4865-bf65-587074f1fa9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"99362c9a-f409-43b6-a9a6-140f29f0af80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b97110ac-e5aa-406f-b7a6-04c85d9c789f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7718ac06-a519-49e4-b773-e963541fb216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-400214\" primary control-plane node in \"insufficient-storage-400214\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"68f0c133-f850-49e8-aa47-b38d6477a7dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760363564-21724 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"470be0d5-8cef-48ad-a4d2-e6ef58bdd447","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"05eebd5e-85d6-4881-83c3-cf88d0b03395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
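Each line of the --output=json stdout above is a CloudEvent; error events such as RSRC_DOCKER_STORAGE carry the exit code and advice in their data payload. A hedged Go sketch of consuming that stream follows; the struct mirrors only the fields visible above and is not minikube's own type.

// Hedged sketch: read `minikube start --output=json` line by line and report
// error events such as RSRC_DOCKER_STORAGE. The event struct is illustrative,
// matching only the fields shown in the output above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}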
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-400214 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-400214 --output=json --layout=cluster: exit status 7 (324.936236ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-400214","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-400214","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1016 19:29:18.812419  419729 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-400214" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-400214 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-400214 --output=json --layout=cluster: exit status 7 (323.194226ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-400214","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-400214","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1016 19:29:19.135847  419797 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-400214" does not appear in /home/jenkins/minikube-integration/21738-288457/kubeconfig
	E1016 19:29:19.146264  419797 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/insufficient-storage-400214/events.json: no such file or directory

                                                
                                                
** /stderr **
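The --layout=cluster status output above reports StatusCode 507 (InsufficientStorage) for both the cluster and its node even though the kubeconfig entry is missing. A hedged Go sketch of decoding that shape follows; the struct is illustrative and based only on the fields printed above, not minikube's internal types.

// Hedged sketch: decode the --layout=cluster status JSON shown above and flag
// nodes that report StatusCode 507 (InsufficientStorage).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, n := range st.Nodes {
		if n.StatusCode == 507 {
			fmt.Printf("node %s: %s (insufficient storage)\n", n.Name, n.StatusName)
		}
	}
}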
helpers_test.go:175: Cleaning up "insufficient-storage-400214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-400214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-400214: (1.969421136s)
--- PASS: TestInsufficientStorage (14.14s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (55.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.5029006 start -p running-upgrade-779500 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.5029006 start -p running-upgrade-779500 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.302569826s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-779500 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-779500 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.271756755s)
helpers_test.go:175: Cleaning up "running-upgrade-779500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-779500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-779500: (2.426651631s)
--- PASS: TestRunningBinaryUpgrade (55.88s)

                                                
                                    
x
+
TestKubernetesUpgrade (343.43s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.152223663s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-627378
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-627378: (1.700947596s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-627378 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-627378 status --format={{.Host}}: exit status 7 (163.809015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.330949038s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-627378 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (139.705336ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-627378] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-627378
	    minikube start -p kubernetes-upgrade-627378 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6273782 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-627378 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
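The downgrade attempt above fails fast with K8S_DOWNGRADE_UNSUPPORTED because the requested v1.28.0 is older than the cluster's existing v1.34.1. A hedged sketch of that kind of guard follows, using golang.org/x/mod/semver for the comparison; minikube's actual check may use a different semver library.

// Hedged sketch: reject a requested Kubernetes version older than the one the
// cluster already runs. Illustration of the guard, not minikube's implementation.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func checkDowngrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}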
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627378 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.207711172s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-627378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-627378
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-627378: (2.619080035s)
--- PASS: TestKubernetesUpgrade (343.43s)

                                                
                                    
x
+
TestMissingContainerUpgrade (120.07s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3743704848 start -p missing-upgrade-153120 --memory=3072 --driver=docker  --container-runtime=crio
E1016 19:29:51.434566  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3743704848 start -p missing-upgrade-153120 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.847067176s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-153120
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-153120
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-153120 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-153120 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.175946458s)
helpers_test.go:175: Cleaning up "missing-upgrade-153120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-153120
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-153120: (2.071344396s)
--- PASS: TestMissingContainerUpgrade (120.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-204009 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-204009 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (98.852377ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-204009] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
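The exit status 14 above comes from minikube's usage validation: --kubernetes-version cannot be combined with --no-kubernetes. A minimal hedged sketch of that style of mutual-exclusion check follows; the validation code is illustrative, not minikube's implementation.

// Hedged sketch: reject mutually exclusive flags the way the MK_USAGE error
// above does. Flag names match the CLI; the check itself is illustrative.
package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		err := errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
	fmt.Println("flags ok")
}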
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (46.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-204009 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-204009 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (46.307133691s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-204009 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (108.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1016 19:30:08.361377  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:30:27.429287  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m45.223021367s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-204009 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-204009 status -o json: exit status 2 (361.446521ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-204009","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-204009
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-204009: (2.483283866s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (108.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-204009 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.603997767s)
--- PASS: TestNoKubernetes/serial/Start (9.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-204009 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-204009 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.655093ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-204009
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-204009: (1.572601664s)
--- PASS: TestNoKubernetes/serial/Stop (1.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-204009 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-204009 --driver=docker  --container-runtime=crio: (8.19850192s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-204009 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-204009 "sudo systemctl is-active --quiet service kubelet": exit status 1 (373.445656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.81s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (55.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2786722924 start -p stopped-upgrade-284470 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1016 19:32:24.358993  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2786722924 start -p stopped-upgrade-284470 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.32711333s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2786722924 -p stopped-upgrade-284470 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2786722924 -p stopped-upgrade-284470 stop: (1.225629906s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-284470 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-284470 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.766236111s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-284470
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-284470: (1.336085428s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

                                                
                                    
x
+
TestPause/serial/Start (84.52s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-870778 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1016 19:35:08.359885  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-870778 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.522854686s)
--- PASS: TestPause/serial/Start (84.52s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (31.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-870778 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-870778 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.322568236s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-078761 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-078761 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (302.564479ms)

                                                
                                                
-- stdout --
	* [false-078761] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 19:37:10.133275  457970 out.go:360] Setting OutFile to fd 1 ...
	I1016 19:37:10.133519  457970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:37:10.133542  457970 out.go:374] Setting ErrFile to fd 2...
	I1016 19:37:10.133562  457970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 19:37:10.133957  457970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-288457/.minikube/bin
	I1016 19:37:10.134548  457970 out.go:368] Setting JSON to false
	I1016 19:37:10.135757  457970 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8360,"bootTime":1760635071,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1016 19:37:10.135859  457970 start.go:141] virtualization:  
	I1016 19:37:10.139642  457970 out.go:179] * [false-078761] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1016 19:37:10.142921  457970 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 19:37:10.142998  457970 notify.go:220] Checking for updates...
	I1016 19:37:10.147134  457970 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 19:37:10.150629  457970 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-288457/kubeconfig
	I1016 19:37:10.153697  457970 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-288457/.minikube
	I1016 19:37:10.156506  457970 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1016 19:37:10.160147  457970 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 19:37:10.163636  457970 config.go:182] Loaded profile config "force-systemd-flag-766055": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 19:37:10.163823  457970 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 19:37:10.227514  457970 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1016 19:37:10.227714  457970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 19:37:10.313032  457970 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-16 19:37:10.301229561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1016 19:37:10.313174  457970 docker.go:318] overlay module found
	I1016 19:37:10.316606  457970 out.go:179] * Using the docker driver based on user configuration
	I1016 19:37:10.319530  457970 start.go:305] selected driver: docker
	I1016 19:37:10.319562  457970 start.go:925] validating driver "docker" against <nil>
	I1016 19:37:10.319580  457970 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 19:37:10.324266  457970 out.go:203] 
	W1016 19:37:10.327414  457970 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1016 19:37:10.330340  457970 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-078761 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-078761" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21738-288457/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-766055
contexts:
- context:
    cluster: force-systemd-flag-766055
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 19:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-766055
  name: force-systemd-flag-766055
current-context: force-systemd-flag-766055
kind: Config
preferences: {}
users:
- name: force-systemd-flag-766055
  user:
    client-certificate: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/force-systemd-flag-766055/client.crt
    client-key: /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/force-systemd-flag-766055/client.key
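The repeated "context was not found for specified context: false-078761" lines above occur because the kubeconfig shown only defines the force-systemd-flag-766055 context. A hedged Go sketch of that lookup follows, using k8s.io/client-go/tools/clientcmd; whether the debug helpers use this exact API is an assumption.

// Hedged sketch: load a kubeconfig like the one printed above and check whether
// a named context exists, which is why the false-078761 lookups fail.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	name := "false-078761"
	ctx, ok := cfg.Contexts[name]
	if !ok {
		fmt.Printf("context was not found for specified context: %s\n", name)
		return
	}
	fmt.Printf("context %s points at cluster %s\n", name, ctx.Cluster)
}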

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-078761

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: cri-dockerd version:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: containerd daemon status:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: containerd daemon config:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: /etc/containerd/config.toml:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: containerd config dump:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: crio daemon status:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: crio daemon config:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: /etc/crio:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

>>> host: crio config:
* Profile "false-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078761"

----------------------- debugLogs end: false-078761 [took: 4.532534795s] --------------------------------
helpers_test.go:175: Cleaning up "false-078761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-078761
--- PASS: TestNetworkPlugins/group/false (5.10s)
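
The repeated * Profile "false-078761" not found lines in the debug dump above are expected: by the time the host inspection runs, the false-078761 profile either was never created or has already been torn down, so every ">>> host:" query falls back to the same hint. A minimal sketch of the same checks done by hand, reusing only the binary, profile name, and commands already shown in this report:

    # see which profiles the test binary still knows about
    out/minikube-linux-arm64 profile list
    # remove the leftover placeholder profile, as helpers_test.go does above
    out/minikube-linux-arm64 delete -p false-078761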

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.890828757s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.89s)
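
The first start above brings up a single-node Kubernetes v1.28.0 cluster on the docker driver with the crio runtime. A quick manual verification of the same profile (not part of the test; just a sketch built from commands that appear elsewhere in this report):

    # confirm the host, kubelet and apiserver are reported as Running
    out/minikube-linux-arm64 status -p old-k8s-version-663330
    # the kubeconfig context is named after the profile
    kubectl --context old-k8s-version-663330 get nodes -o wide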

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-663330 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [78750ccf-b912-4d16-9de5-1a8f1089eeb8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [78750ccf-b912-4d16-9de5-1a8f1089eeb8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004263925s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-663330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)
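
DeployApp applies testdata/busybox.yaml, waits for the pod labelled integration-test=busybox to become Ready, then reads the file-descriptor limit inside the container. Roughly the same steps by hand; the kubectl wait call is an assumption standing in for the test's own polling helper:

    kubectl --context old-k8s-version-663330 create -f testdata/busybox.yaml
    # the helper polls for up to 8m; kubectl wait is a close manual equivalent
    kubectl --context old-k8s-version-663330 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-663330 exec busybox -- /bin/sh -c "ulimit -n"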

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-663330 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-663330 --alsologtostderr -v=3: (12.00981548s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330: exit status 7 (77.500248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
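
EnableAddonAfterStop checks that an addon can be enabled while the cluster is down: the status call returns exit status 7 with Host=Stopped, which the test accepts ("may be ok"), and the dashboard addon is then enabled against the stopped profile. The same two commands, exactly as the log runs them:

    # {{.Host}} prints only the host state; exit status 7 here means "stopped", not a failure
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-663330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4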

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1016 19:40:08.360597  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-663330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.165145736s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-663330 -n old-k8s-version-663330
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8z9qd" [c01af607-d3e2-43d1-a893-02a2a8aabdeb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003564265s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8z9qd" [c01af607-d3e2-43d1-a893-02a2a8aabdeb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003933265s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-663330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-663330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
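
VerifyKubernetesImages lists the images loaded in the profile as JSON and flags anything that is not a stock minikube/Kubernetes image (here the kindnet and busybox images). A rough manual equivalent; the jq filter and the repoTags field name are assumptions about the JSON layout, which can vary between minikube versions:

    out/minikube-linux-arm64 -p old-k8s-version-663330 image list --format=json \
      | jq -r '.[].repoTags[]?'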

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (74.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m14.888567617s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (90.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1016 19:42:24.358983  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m30.809375689s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-225696 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5e3658a2-5c39-4bba-8665-1c1d32931f47] Pending
helpers_test.go:352: "busybox" [5e3658a2-5c39-4bba-8665-1c1d32931f47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5e3658a2-5c39-4bba-8665-1c1d32931f47] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005649684s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-225696 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-225696 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-225696 --alsologtostderr -v=3: (12.090359866s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-751669 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9c91d438-a5f2-4b5c-9b0b-7c64de9a9e22] Pending
helpers_test.go:352: "busybox" [9c91d438-a5f2-4b5c-9b0b-7c64de9a9e22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9c91d438-a5f2-4b5c-9b0b-7c64de9a9e22] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.006761986s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-751669 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696: exit status 7 (87.290942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-225696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (51.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-225696 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.194387194s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-225696 -n no-preload-225696
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-751669 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-751669 --alsologtostderr -v=3: (12.323122388s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669: exit status 7 (98.519538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-751669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-751669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.688832154s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-751669 -n embed-certs-751669
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d6pcj" [39337081-20c2-4f59-8f6e-6a3970082cc2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003632997s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d6pcj" [39337081-20c2-4f59-8f6e-6a3970082cc2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003836374s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-225696 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-225696 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m6s27" [9596d11f-85b7-4ce4-b23f-262ed61f7dca] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00280206s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.26261419s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.26s)
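
The default-k8s-diff-port profile starts the apiserver on port 8444 instead of the usual 8443. One way to confirm the non-default port ended up in the kubeconfig (a manual check, not something the test performs; minikube names the kubeconfig cluster entry after the profile):

    # print the server URL recorded for this profile's cluster entry
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-850436")].cluster.server}'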

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m6s27" [9596d11f-85b7-4ce4-b23f-262ed61f7dca] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002873901s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-751669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-751669 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1016 19:44:41.870225  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:41.876572  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:41.887831  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:41.909145  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:41.950442  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:42.036083  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:42.198364  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:42.520466  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:43.162293  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:44.444009  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:47.006004  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:44:52.127324  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:45:02.369255  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:45:08.359906  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.525011865s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.53s)
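
newest-cni starts with --network-plugin=cni and passes kubeadm the extra pod-network-cidr 10.42.0.0/16; the warnings later in this group note that pods cannot schedule until a CNI is actually installed. A hedged way to confirm the CIDR was applied (manual check only; the node's podCIDR should be a slice of 10.42.0.0/16):

    kubectl --context newest-cni-408495 get nodes -o jsonpath='{.items[*].spec.podCIDR}'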

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-408495 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-408495 --alsologtostderr -v=3: (1.348098183s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495: exit status 7 (72.327252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-408495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1016 19:45:22.850615  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-408495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.792391453s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-408495 -n newest-cni-408495
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-408495 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-850436 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a85c2c7f-3f8e-42da-8972-737f3f75d285] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a85c2c7f-3f8e-42da-8972-737f3f75d285] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003186107s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-850436 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (87.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.107442544s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-850436 --alsologtostderr -v=3
E1016 19:46:03.812057  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-850436 --alsologtostderr -v=3: (12.068420924s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436: exit status 7 (106.563761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-850436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1016 19:46:31.436610  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-850436 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.879514782s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-850436 -n default-k8s-diff-port-850436
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ng9x9" [a086c25f-aa8c-4925-b778-32f4312b58da] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002736049s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ng9x9" [a086c25f-aa8c-4925-b778-32f4312b58da] Running
E1016 19:47:07.430586  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004313534s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-850436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
I1016 19:47:12.210549  290312 config.go:182] Loaded profile config "auto-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-078761 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-078761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pdgnk" [26d15daf-d543-493b-9be3-2ed7ff572706] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pdgnk" [26d15daf-d543-493b-9be3-2ed7ff572706] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004386671s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-850436 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m29.072085237s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.07s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-078761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
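
The DNS, Localhost and HairPin checks above all exec into the netcat deployment created by NetCatPod: cluster DNS is probed with nslookup, then nc is pointed first at localhost and then back at the pod's own service name to verify hairpin traffic. The three probes, exactly as the log runs them:

    kubectl --context auto-078761 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"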

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1016 19:47:58.704217  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:48:19.186315  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.092119939s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-nplsj" [67896de8-eaaa-4be8-9080-e4f7a63925a1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003980604s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
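
ControllerPod only confirms that the CNI's node agent is running before the connectivity tests start; for kindnet that means a pod labelled app=kindnet in kube-system. A one-line manual equivalent of the wait, using the label and namespace from the log above:

    kubectl --context kindnet-078761 -n kube-system get pods -l app=kindnet -o wide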

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-h64jn" [8ff0e56e-f820-4ce4-bf5e-1447c22b194d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004505454s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-078761 "pgrep -a kubelet"
I1016 19:48:58.217902  290312 config.go:182] Loaded profile config "kindnet-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-078761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t8krw" [010a06b7-98d6-4b30-98c0-1e9ff60d9822] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1016 19:49:00.147606  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-t8krw" [010a06b7-98d6-4b30-98c0-1e9ff60d9822] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005030789s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-078761 "pgrep -a kubelet"
I1016 19:49:03.206280  290312 config.go:182] Loaded profile config "calico-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-078761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2dzmr" [478cd85f-f82c-436c-a197-397c0328064a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2dzmr" [478cd85f-f82c-436c-a197-397c0328064a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004172651s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-078761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-078761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.01s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.011904744s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.01s)
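
Unlike the flannel and bridge profiles later in this run, which pass a built-in plugin name to --cni, this profile points --cni at a manifest file (testdata/kube-flannel.yaml). Trimmed to just the CNI-relevant flags, the two forms used in this report look like:

    out/minikube-linux-arm64 start -p custom-flannel-078761 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p flannel-078761 --cni=flannel --driver=docker --container-runtime=crio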

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1016 19:50:08.359944  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/addons-303264/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:09.576413  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/old-k8s-version-663330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:22.085790  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:41.383078  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:41.389442  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:41.400726  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:41.422178  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:41.463565  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:41.544938  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:41.706312  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:42.028118  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:42.670212  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:50:43.951604  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m24.122589727s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.12s)
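
The interleaved "Loading client cert failed" messages appear to come from client-go's client-certificate reload logic: they reference client.crt files under profiles deleted earlier in the run (addons-303264, old-k8s-version-663330, no-preload-225696, default-k8s-diff-port-850436), so the reload fails each time it fires. They do not affect the cluster being started here, which comes up and the test passes. Which profiles still exist at any point can be checked with:

    out/minikube-linux-arm64 profile list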

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-078761 "pgrep -a kubelet"
I1016 19:50:45.614637  290312 config.go:182] Loaded profile config "custom-flannel-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)
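
KubeletFlags simply dumps the running kubelet command line over minikube ssh. On a crio profile the part usually worth checking is the container-runtime endpoint; assuming that flag is present on the kubelet command line in this configuration, it can be isolated from the same output with:

    out/minikube-linux-arm64 ssh -p custom-flannel-078761 "pgrep -a kubelet" | tr ' ' '\n' | grep -i container-runtime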

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-078761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5wlkc" [60395921-017d-4ed6-892e-7743ea61c97f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1016 19:50:46.513401  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5wlkc" [60395921-017d-4ed6-892e-7743ea61c97f] Running
E1016 19:50:51.634988  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003168491s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-078761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-078761 "pgrep -a kubelet"
I1016 19:51:07.351194  290312 config.go:182] Loaded profile config "enable-default-cni-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-078761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qkm6c" [e669d28f-45b7-4b6a-888d-95a26cfbe2a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qkm6c" [e669d28f-45b7-4b6a-888d-95a26cfbe2a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00444689s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (54.58s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.579772822s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.58s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-078761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1016 19:52:03.320331  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/default-k8s-diff-port-850436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:12.584053  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:12.590454  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:12.601799  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:12.623119  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:12.664466  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:12.746302  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:12.907775  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:13.229070  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-078761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.499507145s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.50s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gd6kt" [8def046a-7d73-4e2e-b09c-75d0ff129fb8] Running
E1016 19:52:13.870905  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:15.152671  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:17.715237  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004551636s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
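
ControllerPod waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) to be Running. The pod name kube-flannel-ds-gd6kt implies the DaemonSet is called kube-flannel-ds, so an approximate manual equivalent of this wait is:

    kubectl --context flannel-078761 -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=10m
    kubectl --context flannel-078761 -n kube-flannel get pods -l app=flannel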

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-078761 "pgrep -a kubelet"
I1016 19:52:20.216762  290312 config.go:182] Loaded profile config "flannel-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-078761 replace --force -f testdata/netcat-deployment.yaml
I1016 19:52:20.532840  290312 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4l7xd" [976ffa7d-9a4a-4db1-8e75-4c3acc2a7c08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1016 19:52:22.837357  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/auto-078761/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 19:52:24.359496  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/functional-703623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4l7xd" [976ffa7d-9a4a-4db1-8e75-4c3acc2a7c08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003803026s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-078761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-078761 "pgrep -a kubelet"
I1016 19:53:02.380263  290312 config.go:182] Loaded profile config "bridge-078761": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-078761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tpzwb" [557ae728-a87c-436a-82de-fee482cfa87b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1016 19:53:05.928071  290312 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-288457/.minikube/profiles/no-preload-225696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tpzwb" [557ae728-a87c-436a-82de-fee482cfa87b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004018119s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-078761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-078761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-790969 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-790969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-790969
--- SKIP: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-031282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-031282
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.42s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-078761 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-078761" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-078761

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078761"

                                                
                                                
----------------------- debugLogs end: kubenet-078761 [took: 5.149357466s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-078761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-078761
--- SKIP: TestNetworkPlugins/group/kubenet (5.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-078761 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-078761" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-078761

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-078761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078761"

                                                
                                                
----------------------- debugLogs end: cilium-078761 [took: 5.796374199s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-078761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-078761
--- SKIP: TestNetworkPlugins/group/cilium (6.12s)
